Title: Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing

URL Source: https://arxiv.org/html/2508.04334

Published Time: Tue, 30 Sep 2025 02:15:42 GMT

Noor Islam S. Mohammad is with the Department of Computer Science, New York University Tandon School of Engineering, Brooklyn, NY, USA. E-mail: islam.m@nyu.edu

###### Abstract

The rapid growth of Internet of Things (IoT) devices produces massive, heterogeneous data streams, demanding scalable and efficient scheduling in cloud environments to meet latency, energy, and Quality-of-Service (QoS) requirements. Existing scheduling methods often lack adaptability to the dynamic workloads and network variability inherent in IoT-cloud systems. This paper presents a novel hybrid scheduling algorithm combining deep Reinforcement Learning (RL) and Ant Colony Optimization (ACO) to address these challenges. The deep RL agent utilizes a model-free policy-gradient approach to learn adaptive task allocation policies responsive to real-time workload fluctuations and network states. Simultaneously, the ACO metaheuristic conducts a global combinatorial search to optimize resource distribution, mitigate congestion, and balance load across distributed cloud nodes. Extensive experiments on large-scale synthetic IoT datasets, reflecting diverse workloads and QoS constraints, demonstrate that the proposed method achieves up to an 18.4% reduction in average response time, a 12.7% improvement in resource utilization, and a 9.3% decrease in energy consumption compared to leading heuristics and RL-only baselines. Moreover, the algorithm ensures strict Service Level Agreement (SLA) compliance through deadline-aware scheduling and dynamic prioritization.
The results confirm the effectiveness of integrating model-free RL with swarm intelligence for scalable, energy-efficient IoT data scheduling, offering a promising approach for next-generation IoT-cloud platforms.

###### Index Terms:

Evolutionary Algorithms; Data Scheduling; Cloud Computing; Reinforcement Learning; Metaheuristics; Resource Optimization

I Introduction
--------------

The rapid evolution of Internet of Things (IoT) systems has caused an unprecedented surge in data generation, demanding scalable and latency-aware computing infrastructures within cloud and grid environments [1, 2]. Conventional IoT cloud architectures rely on tightly coupled compute-storage units under centralized control, which struggle to handle the extreme heterogeneity, dynamic workloads, and stringent latency constraints of large-scale, data-intensive IoT deployments [3, 4]. These limitations often result in scalability bottlenecks, network congestion, and increased operational overhead, especially across geographically distributed or resource-constrained settings. To compensate, IoT clusters typically replicate data blocks across multiple nodes, enabling local task execution and reducing transmission latency while enhancing data reliability and parallel processing. Meanwhile, emerging disaggregated cloud architectures decouple compute and storage resources via ultra-high-speed interconnects, offering modularity and independent scaling [5, 6, 7].
However, such designs exacerbate challenges related to non-local data execution and heightened network data movement, which frequently become bottlenecks given massive real-time sensing data and bandwidth constraints [8, 9].

Data-parallel processing paradigms that co-locate computation with data have emerged to mitigate these issues by minimizing network load and optimizing resource utilization [10, 11]. Nonetheless, achieving efficient task-to-data affinity in heterogeneous IoT clusters remains challenging due to diverse node capabilities, dynamic workloads, and energy constraints. Distributed file systems such as HDFS [12, 13] replicate data blocks to improve fault tolerance, but replication exponentially increases the computational complexity of scheduling. Furthermore, existing locality-based schedulers often ignore node heterogeneity, causing load imbalance and suboptimal performance [14, 15]. This paper introduces Sensor Cloud Computing and Data Scheduling Optimization (SCC-DSO), a context-aware, reinforcement learning-driven scheduling framework tailored for heterogeneous IoT clusters [16]. SCC-DSO formulates scheduling as a constrained min–max optimization problem to minimize makespan and maximize data locality under resource and bandwidth constraints.
It leverages a kernel-based regression model to predict execution times from node and workload features, and integrates reinforcement learning with ant colony optimization to efficiently explore the high-dimensional scheduling space of high-performance computing environments [17, 18].

Designed for latency-critical IoT applications such as autonomous driving, smart manufacturing, and industrial robotics, the proposed SCC-DSO framework introduces a robust scheduling paradigm that adapts to system heterogeneity while maintaining low latency and energy efficiency [19, 20]. Unlike conventional schedulers that either overlook heterogeneity or rely solely on static heuristics, SCC-DSO integrates reinforcement learning (RL) with metaheuristic optimization to support adaptive task mapping in dynamic cluster environments. Extensive experiments on a 100-node heterogeneous testbed demonstrate up to a 22.4% reduction in execution time, 93.1% task–data locality, and consistently higher throughput relative to state-of-the-art baselines [21, 22].
The primary contributions of this work are fourfold: (i) development of a novel RL–metaheuristic hybrid that enables heterogeneity-aware and latency-sensitive task allocation under dynamic workloads; (ii) formulation of a constrained min–max optimization model that balances locality and latency objectives while preserving scalability; (iii) introduction of an RL-guided ant colony optimization (ACO) mechanism to support proactive task migration and data prefetching in distributed settings; and (iv) comprehensive empirical validation across varying cluster sizes, replication factors, and straggler scenarios, confirming SCC-DSO's superiority in efficiency, adaptability, and robustness for IoT–cloud environments. Together, these contributions position SCC-DSO as a scalable solution for next-generation latency-sensitive applications.

II Related Work
---------------

This work addresses efficient data scheduling and resource optimization in scalable, heterogeneous IoT edge–cloud environments. Prior research has approached these challenges through predictive scheduling, data placement optimization, and energy-efficient orchestration, focusing on minimizing latency, balancing workloads, and reducing energy consumption.

Predictive Scheduling Models: Early models like Reservation First-Fit with Feedback Distribution (RF-FD) [23] apply multiple linear regression to predict job completion from historical data, performing well in semi-homogeneous clusters but lacking flexibility for non-linear workload variations. RSYNC [24] offers log-based synchronization across fog nodes but suffers throughput degradation under high load and heterogeneity.
Autonomic frameworks based on MAPE-K loops [25] enhance adaptability through runtime feedback; however, they assume static task profiles and homogeneous infrastructures, limiting their use in dynamic IoT contexts. Machine learning approaches such as polynomial regression schedulers [26] improve non-linear modeling but face challenges with high-dimensional data variance, underscoring the need for more generalizable models.

Data Placement and Locality Optimization: Data replication strategies like the rack-aware policy in the Hadoop Distributed File System (HDFS) [27, 28] replicate data blocks locally and across racks to improve fault tolerance and reduce latency. These uniform approaches, however, often cause load imbalance in heterogeneous clusters with variable node capabilities [29, 30]. Recent work explores dynamic placement that considers node compute capacity, yet integration with intelligent scheduling remains sparse. The SCC-DSO framework fills this gap by aligning task assignment with predictive execution time and node resource profiles, optimizing data locality and workload distribution in real time.

Energy-Efficient Scheduling and Virtualization: Energy-aware techniques, including constrained energy models [31], Dynamic Voltage and Frequency Scaling (DVFS) [32], and queuing-based power optimization, aim to reduce energy use without SLA violations. Virtualization technologies such as live VM migration facilitate workload consolidation and energy savings.
Despite these advances, many predictive models relying on hardware counters or VM energy profiles have limited scalability in distributed IoT due to static assumptions and linearity constraints [33, 34].

SCC-DSO Contributions: Unlike prior work, SCC-DSO integrates kernel-based execution time prediction with reinforcement learning and metaheuristic Ant Colony Optimization (ACO) for dynamic, heterogeneity-aware scheduling. Its tri-layer architecture adapts to workload variability, node heterogeneity, and data locality constraints, enabling robust, scalable, and energy-efficient task orchestration in complex IoT-cloud systems [35, 36].

Figure 1 illustrates a hybrid IoT-cloud architecture combining decentralized storage via IPFS with centralized orchestration. Edge sensor nodes generate multi-modal data streams that are processed locally and forwarded through MQTT brokers interfacing with MariaDB [37] and an IPFS private swarm, ensuring tamper resistance, fault tolerance, and rapid retrieval. This hybrid model enhances data integrity, responsiveness, and interoperability across heterogeneous IoT environments [38].

![Hybrid IoT-cloud architecture](https://arxiv.org/html/2508.04334v2/imgs/hybrid.png)

Figure 1: Hybrid IoT-cloud architecture combines IPFS, MariaDB, and MQTT for scalable, reliable edge-to-cloud data management.
III Proposed Evolutionary Algorithm
-----------------------------------

The proposed architecture (Figure 2) integrates a Hybrid Electro Search–Ant Colony Optimization (ES-ACO) algorithm, enabling efficient task scheduling in sensor cloud environments and delivering optimized performance for intelligent cloud-based irrigation control [69]. As the diagram shows, cloud users submit tasks that are managed by cloud brokers and a task manager before being forwarded to the hybrid ES-ACO scheduler. This scheduler optimizes task allocation across virtual machines (VMs) hosted within a cloud data center and mapped to physical sensor nodes via resource managers, which integrate data collected by Wireless Sensor Networks (WSNs) in a distributed computing environment. The multi-objective scheduler aims to minimize energy consumption, makespan, and execution cost while maximizing throughput and reducing the task rejection ratio. This approach supports intelligent irrigation control and resource-efficient, sustainable agricultural practices.

The Sensor Cloud Computing and Data Scheduling Optimization (SCC-DSO) framework addresses performance bottlenecks in heterogeneous IoT-edge clusters by integrating reinforcement learning (RL), ant colony optimization (ACO), and predictive modeling. This multi-stage algorithm dynamically schedules tasks based on data locality, node capability, and network conditions to meet stringent latency and throughput requirements [39]. The IoT cluster is modeled as a graph $G=(\mathcal{V},\mathcal{E})$, where $\mathcal{V}$ is the set of computing nodes with heterogeneous processing capacity, memory, and I/O characteristics, and $\mathcal{E}$ represents communication links.
Input datasets are partitioned into fixed-size blocks (64 MB in HDFS), with replicas ensuring fault tolerance. Tasks are assigned to nodes hosting the required data or fetching it with minimal overhead [40].

![Hybrid ES-ACO scheduling](https://arxiv.org/html/2508.04334v2/imgs/aco.jpeg)

Figure 2: Hybrid ES-ACO Algorithm for Task Scheduling in a Sensor Cloud for Smart Irrigation

The task-to-node assignment is formulated as a min-max optimization to minimize the makespan (maximum execution time across nodes) while respecting data locality and node capacity constraints:

$$\min_{\mathbf{x}}\max_{i\in\mathcal{V}}\sum_{j\in\mathcal{J}}x_{ij}T_{ij} \tag{1}$$

Subject to:

$$\sum_{i\in\mathcal{V}}x_{ij}=1,\quad\forall j\in\mathcal{J},$$

$$\sum_{j\in\mathcal{J}}x_{ij}d_{j}\leq C_{i}(t),\quad\forall i\in\mathcal{V},$$

$$x_{ij}\in\{0,1\},\quad\forall i\in\mathcal{V},\;j\in\mathcal{J},$$

where $x_{ij}=1$ if task $j$ is assigned to node $i$, $T_{ij}$ is the predicted execution time of task $j$ on node $i$, $d_{j}$ is the data size required by task $j$, and $C_{i}(t)$ is the dynamic computational capacity of node $i$ at time $t$. These constraints ensure each task is assigned to exactly one node while respecting capacity limits, minimizing the makespan.
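Since the exact solver is not specified here, a minimal greedy sketch can illustrate the structure of Eq. (1): each task goes to the feasible node whose resulting load is smallest, subject to the capacity constraint. All names (`greedy_min_max_assign`, the toy `T`, `d`, `C` values) are illustrative assumptions, not the paper's implementation.

```python
def greedy_min_max_assign(T, d, C):
    """Sketch of the min-max assignment of Eq. (1).

    T[i][j]: predicted time of task j on node i; d[j]: data size of
    task j; C[i]: capacity of node i. Returns (assignment j -> i, makespan).
    """
    n_nodes, n_tasks = len(T), len(T[0])
    load = [0.0] * n_nodes   # accumulated execution time per node
    used = [0.0] * n_nodes   # accumulated data size per node (capacity constraint)
    assign = {}
    # schedule "longer" tasks first (classic LPT-style heuristic)
    order = sorted(range(n_tasks),
                   key=lambda j: -min(T[i][j] for i in range(n_nodes)))
    for j in order:
        feasible = [i for i in range(n_nodes) if used[i] + d[j] <= C[i]]
        if not feasible:
            raise ValueError(f"no node can host task {j}")
        # pick the node minimizing its load after taking task j
        i = min(feasible, key=lambda i: load[i] + T[i][j])
        assign[j] = i
        load[i] += T[i][j]
        used[i] += d[j]
    return assign, max(load)
```

A greedy pass like this only approximates the min-max optimum; in SCC-DSO the combinatorial search itself is delegated to the ACO mechanism of Section III-A.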
Task execution time $T_{ij}$ is predicted using a kernel-based regression model to account for task and node heterogeneity:

$$T_{ij}=\sum_{s=1}^{S}w_{s}K(\mathbf{x}_{j},\mathbf{x}_{s})+b, \tag{2}$$

where $\mathbf{x}_{j}=[m_{j},\mathbf{z}_{i}]$ combines task data size $m_{j}$ and node features $\mathbf{z}_{i}$ (e.g., CPU speed, memory), $K(\mathbf{x}_{j},\mathbf{x}_{s})=\exp\left(-\frac{\|\mathbf{x}_{j}-\mathbf{x}_{s}\|^{2}}{2\sigma^{2}}\right)$ is a Gaussian radial basis function kernel, $\{(\mathbf{x}_{s},t_{s})\}_{s=1}^{S}$ are historical training data, $w_{s}$ are learned coefficients, $b$ is a bias term, and $\sigma\in[0.5,2.0]$ (Table I) is the kernel bandwidth. The learning rate $\gamma\in[0.01,0.1]$ tunes the model's convergence.

The total delay for a scheduling plan $P$ quantifies latency, combining transmission and processing delays:

$$\text{Delay}(P)=\sum_{e\in\mathcal{E}(P)}\left(\frac{d_{e}}{b_{e}}+q_{e}\right)+\sum_{v\in\mathcal{V}(P)}\frac{w_{v}}{c_{v}}, \tag{3}$$

where $d_{e}$ is the data size transferred over edge $e$, $b_{e}$ is the bandwidth, $q_{e}$ is the queuing delay, $w_{v}$ is the computational workload at node $v$, and $c_{v}$ is the node's processing capacity. This metric ensures low-latency scheduling for IoT applications.
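The prediction step of Eq. (2) can be sketched directly: a weighted sum of RBF kernel evaluations against historical samples plus a bias. The feature vectors, weights, and function names below are illustrative placeholders (the paper does not specify how $w_s$ and $b$ are fitted), so this is a sketch of the predictor's forward pass only.

```python
import math

def rbf_kernel(x, x_s, sigma=1.0):
    """Gaussian RBF kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2))."""
    sq = sum((a - b) ** 2 for a, b in zip(x, x_s))
    return math.exp(-sq / (2.0 * sigma ** 2))

def predict_exec_time(x_j, train_x, weights, bias=0.0, sigma=1.0):
    """Eq. (2): T_ij = sum_s w_s * K(x_j, x_s) + b.

    x_j = [task_size, *node_features]; train_x holds historical samples x_s.
    """
    return sum(w * rbf_kernel(x_j, x_s, sigma)
               for w, x_s in zip(weights, train_x)) + bias
```

When the query point coincides with a training point, the kernel evaluates to 1, so the contribution of that sample is exactly its weight.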
Node computational efficiency guides task assignment to optimize resource utilization:

$$\text{Eff}_{j,i}=\frac{m_{j}}{T_{ij}}, \tag{4}$$

where $m_{j}$ is the task data size and $T_{ij}$ is the predicted execution time from Eq. (2). Higher efficiency indicates better suitability for task execution under ideal data locality.

To ensure balanced execution across heterogeneous nodes, the weighted execution time is equalized:

$$f_{i}\sum_{j\in\text{map}_{j,i}}T_{j,i}=f_{k}\sum_{j\in\text{map}_{j,k}}T_{j,k},\quad\forall i,k\in\mathcal{V},\;i\neq k, \tag{5}$$

where $f_{i}=1/c_{i}$ is a weighting factor based on node $i$'s capacity $c_{i}$, and $\text{map}_{j,i}$ denotes the tasks assigned to node $i$. This prevents bottlenecks by synchronizing completion times.

### III-A Ant Colony Optimization (ACO) Scheduling Mechanism

SCC-DSO adapts ACO with predictive modeling and heterogeneity-aware pheromone updates to solve the NP-hard task-to-node scheduling problem. The process has four phases. Initialization: A graph $G=(\mathcal{V},\mathcal{E})$ is initialized with pheromone trails $\tau_{ij}(0)=\tau_{0}\in[0.01,0.1]$ (Table I), incorporating execution time predictions from Eq. (2).
Solution Construction: Ants select task-node assignments using:

$$P_{ij}=\frac{\tau_{ij}^{\alpha}\cdot\eta_{ij}^{\beta}}{\sum_{k\in\mathcal{V}_{\text{eligible}}}\tau_{ik}^{\alpha}\cdot\eta_{ik}^{\beta}}, \tag{6}$$

where $\eta_{ij}=1/T_{ij}$ is the heuristic desirability, $\alpha\in[1.0,2.0]$ and $\beta\in[2.0,3.0]$ control pheromone and heuristic influence (Table I), and $\mathcal{V}_{\text{eligible}}$ is the set of eligible nodes for task $j$.

Pheromone Update: High-quality schedules update pheromone trails:

$$\tau_{ij}\leftarrow(1-\rho)\tau_{ij}+\sum_{k=1}^{K}\frac{Q}{L^{(k)}}, \tag{7}$$

where $\rho\in[0.1,0.3]$ is the evaporation rate, $Q\in[100,500]$ is a constant, $K\in[10,20]$ is the number of ants, and $L^{(k)}$ is the makespan of the $k$-th ant's schedule (Table I). A minimum pheromone trail $\delta\in[10^{-4},10^{-2}]$ prevents stagnation.
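One construction-and-update step of Eqs. (6) and (7) can be sketched as follows. Parameter defaults follow the ranges in Table I; the data structures (`tau` keyed by `(node, task)` pairs, `schedules` as edge lists with makespans) are illustrative assumptions, not the paper's code.

```python
import random

def select_node(tau, T, j, eligible, alpha=1.0, beta=2.0, rng=random):
    """Roulette-wheel selection with P_ij ∝ tau_ij^alpha * eta_ij^beta,
    eta_ij = 1/T_ij (Eq. (6))."""
    scores = {i: (tau[(i, j)] ** alpha) * ((1.0 / T[i][j]) ** beta)
              for i in eligible}
    total = sum(scores.values())
    r, acc = rng.random() * total, 0.0
    for i, s in scores.items():
        acc += s
        if acc >= r:
            return i
    return i  # numerical fallback for rounding at the boundary

def update_pheromone(tau, schedules, rho=0.2, Q=100.0, delta=1e-4):
    """Eq. (7): evaporate all trails, then deposit Q/L^(k) along each
    ant's schedule; schedules is a list of (edges, makespan) pairs."""
    for key in tau:
        tau[key] *= (1.0 - rho)            # evaporation
    for edges, makespan in schedules:
        for key in edges:
            tau[key] += Q / makespan       # deposit proportional to quality
    for key in tau:
        tau[key] = max(tau[key], delta)    # floor against stagnation
    return tau
```

With equal pheromone levels, the $\eta_{ij}^{\beta}$ term dominates, steering ants strongly toward nodes with small predicted execution times.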
Termination: The algorithm converges (within $I\in[20,50]$ iterations, tolerance $\epsilon\in[10^{-3},10^{-2}]$) by minimizing:

$$J(\pi)=w_{1}\text{Delay}(\pi)+w_{2}\text{Energy}(\pi)+w_{3}\text{LossPkt}(\pi), \tag{8}$$

where $w_{1},w_{2},w_{3}\in[0.2,0.4]$ (summing to 1) weight delay (Eq. (3)), energy consumption, and packet loss, respectively.

Dynamic Task Migration and Data Prefetching: Tasks migrate when the resource queue delay exceeds a threshold:

$$RQ_{i}=\frac{r\cdot B_{k}}{\rho\cdot V_{i}}, \tag{9}$$

where migration is triggered when $RQ_{i}>\varphi$ and $RQ_{i}-TS_{i}>\varphi$, with $\varphi\in[0.05,0.1]\cdot TS_{i}$; here $r$ is the task's resource demand, $B_{k}$ is the data block size, $\rho\in[0.1,0.3]$ is a scaling factor, and $V_{i}$ is the node's processing rate (Table I). The optimal source node minimizes:

$$\text{PLF}_{T,S}=\sqrt{\frac{(\varphi_{S}-\varphi_{T})^{2}}{(T_{S}-T_{T})^{2}}}, \tag{10}$$

where $\varphi_{S},\varphi_{T}\in[0.05,0.1]\cdot TS_{i}$ are thresholds, and $T_{S},T_{T}$ are the execution times of the source and target nodes. Task migrations are limited to $\theta\in[1,5]$ tasks per node per iteration.
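The migration trigger around Eq. (9) reduces to two small checks. A minimal sketch, assuming $\varphi$ is expressed as a fraction of the scheduling interval $TS_i$ (per the $[0.05, 0.1]\cdot TS_i$ range above); the function names and numeric inputs are illustrative.

```python
def resource_queue_delay(r, block_size, rho, node_rate):
    """Eq. (9): RQ_i = (r * B_k) / (rho * V_i)."""
    return (r * block_size) / (rho * node_rate)

def should_migrate(rq, ts, phi_frac=0.1):
    """Trigger migration when RQ_i > phi and RQ_i - TS_i > phi,
    with phi = phi_frac * TS_i (phi_frac in [0.05, 0.1])."""
    phi = phi_frac * ts
    return rq > phi and (rq - ts) > phi
```

The second condition ensures a node is only flagged when its queue delay exceeds the scheduling interval by the threshold margin, avoiding migrations for transient backlogs.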
TABLE I: Parameters and Ranges for SCC-DSO

Table I lists the configuration parameters for SCC-DSO. The pheromone influence $\alpha$ balances exploration and exploitation, while $\beta$ emphasizes heuristic information (e.g., execution time). The evaporation rate $\rho$ and initial pheromone $\tau_{0}$ regulate trail persistence. The parameters $K$, $I$, and $w_{1},w_{2},w_{3}$ balance computational overhead and performance. The prefetch threshold $\varphi$, learning rate $\gamma$, migration limit $\theta$, and kernel bandwidth $\sigma$ support predictive modeling and dynamic scheduling, ensuring scalability and adaptability in IoT-edge clusters.

### III-B Lightweight Hybrid RL-ACO Methods

The Sensor Cloud Computing and Data Scheduling Optimization (SCC-DSO) framework incorporates a lightweight hybrid scheduler that combines Reinforcement Learning (RL) with Ant Colony Optimization (ACO), specifically designed for resource-constrained edge devices in IoT–edge clusters. This approach minimizes computational overhead while preserving high scheduling accuracy, making it well-suited for latency-sensitive applications such as autonomous driving, industrial robotics, and intelligent surveillance systems. RL is employed to learn adaptive task-node selection policies based on dynamic network conditions, while ACO refines these selections through heuristic-guided pheromone updates to achieve near-optimal scheduling decisions (Algorithm 1).
Computational Complexity: Let $|T|$ be the number of tasks, $|N|$ the number of nodes, and $I$ the number of ACO iterations. The RL policy inference operates in $\mathcal{O}(|T||N|)$ per scheduling round, while the ACO refinement contributes $\mathcal{O}(I|T||N|)$ due to pheromone and heuristic updates. Thus, the overall complexity is:

$$\mathcal{O}(|T||N|(1+I))$$

Given that $I\ll|T|$ in our lightweight configuration, the scheduler maintains near-linear scalability with respect to task volume.

Algorithm 1 Lightweight Hybrid RL-ACO Scheduler

Input: Task queue $T$, node set $N$, resource capacities $R$
Output: Optimized task-node assignment

1. Initialize RL policy $\pi_{\theta}$ and pheromone matrix $\tau$;
2. while tasks remain in $T$ do
   1. Extract state features from the current cluster load;
   2. Select candidate task-node pairs using $\pi_{\theta}$;
   3. Apply ACO refinement:
      1. Compute heuristic desirability $\eta_{ij}$ for each task-node pair;
      2. Update pheromone trails $\tau_{ij}$ based on solution quality;
      3. Select the path with maximum $\tau_{ij}\cdot\eta_{ij}$;
   4. Execute selected tasks locally where possible;
   5. Update $\pi_{\theta}$ using observed latency and resource utilization;
3. return optimized assignment plan;

Latency Model: The end-to-end scheduling latency $L_{total}$ is modeled as:

$$L_{total}=L_{comp}+L_{trans}+L_{queue}$$

where $L_{comp}$ denotes the computation delay for policy inference and ACO updates, $L_{trans}$ is the transmission delay between nodes, and $L_{queue}$ is the queuing delay before task execution. By enabling local execution and reducing $L_{trans}$, the proposed method achieves substantial latency savings, keeping $L_{total}^{\text{Hybrid}}$ below the baseline even for large clusters ($n>100$).
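The latency decomposition $L_{total}=L_{comp}+L_{trans}+L_{queue}$ can be sketched numerically to show where local execution saves time: scheduling a task on the node that already holds its data drops $L_{trans}$ to zero. All numbers and names below (`trans_delay`, 64 MB block, 1000 Mbps link) are illustrative assumptions.

```python
def trans_delay(data_mb, bandwidth_mbps, local):
    """Transmission delay in seconds; zero when data is node-local."""
    return 0.0 if local else data_mb * 8.0 / bandwidth_mbps

def total_latency(l_comp, l_trans, l_queue):
    """L_total = L_comp + L_trans + L_queue."""
    return l_comp + l_trans + l_queue

# Illustrative comparison: same task, local vs. remote data
local_lat = total_latency(0.4, trans_delay(64, 1000, local=True), 0.1)
remote_lat = total_latency(0.4, trans_delay(64, 1000, local=False), 0.1)
```

For this toy configuration the entire gap between the two schedules is the 64 MB transfer over the 1000 Mbps link, which is precisely the term the locality-aware scheduler eliminates.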
![Optimal data block placement](https://arxiv.org/html/2508.04334v2/imgs/im1.png)

Figure 4: Optimal data block placement in a heterogeneous IoT-edge cluster, balancing workloads based on node computational efficiencies (Eq. (18)). Grid-based block partitioning strategy for parallel spatial data processing.

Figure 4 illustrates a grid-based block partitioning strategy widely adopted in parallel and distributed computing for spatial data analysis, matrix computation, and domain decomposition. The grid is divided into nine sub-blocks $B_{1}$ through $B_{9}$, representing distinct portions of the computational domain. The color scheme implies functional segregation: the gray blocks ($B_{1}$–$B_{4}$) denote initial or pre-processed data regions, the blue blocks ($B_{5}$–$B_{6}$) correspond to core computation zones, while the green blocks ($B_{7}$–$B_{9}$) represent final or post-processing regions. This hierarchical decomposition facilitates enhanced computational efficiency, optimized memory locality, and balanced workload distribution, all critical components in high-performance data processing. Moreover, the structured layout supports region-wise parallel execution and reduces inter-process communication overhead, offering scalable performance for large-scale scientific simulations and AI-driven spatial modeling applications.
### IV-C Queue-Aware Dynamic Data Placement Optimization

In perceptual cloud computing environments supporting large-scale IoT workloads, input jobs are partitioned into fixed-size blocks and distributed using multi-replica strategies to maximize fault tolerance. While this enhances data availability, it introduces task redundancy, where identical blocks exist across multiple nodes, resulting in bandwidth contention, task duplication, and resource underutilization. Our proposed SCC-DSO algorithm introduces a novel hybrid scheduling approach that dynamically reorders task queues by correlating block affinity, predicted task cost, and node load, thereby minimizing redundancy and improving scheduling precision across the cluster.

In addition, multidimensional performance metrics (propagation delay, bandwidth, jitter, packet loss, and ACO-based cost) enable robust, real-time scheduling under strict latency and energy constraints in IoT environments, optimizing scheduling across heterogeneous sensor-cloud infrastructures. The delay function $\text{Delay}(e):E\rightarrow\mathbb{R}^{+}$ represents the expected transmission delay incurred over edge $e\in E$, where $E$ denotes the set of communication links within the cluster. This metric reflects the latency contribution of each link and is influenced by factors such as link bandwidth, queuing delay, and traffic congestion. Complementing this, the delay jitter function $\text{DelayJit}(e):E\rightarrow\mathbb{R}^{+}$ models the variability or instability of transmission delay along edge $e$, which is particularly important for real-time applications that are sensitive to timing inconsistencies [46].

This approach ensures that task assignments are independent across nodes, maximizes data locality, and prevents unnecessary cross-node communication, improving overall system performance.
For example, consider an IoT cluster comprising $n=5$ nodes, where the job input is partitioned into $B=26$ data blocks, each replicated twice to provide fault tolerance [59]. The SCC-DSO algorithm produces an optimized block assignment in which Node 1 stores blocks $m_{0}$ through $m_{5}$, Node 2 stores blocks $m_{6}$ through $m_{11}$, Node 3 holds blocks $m_{12}$ through $m_{17}$, Node 4 contains blocks $m_{18}$ through $m_{21}$, and Node 5 is assigned blocks $m_{22}$ through $m_{25}$. This allocation ensures that task queues can be reorganized to prioritize local data processing, eliminate redundant task scheduling, and achieve balanced, high-throughput execution across the cluster. Figure 5 illustrates task queue states before and after SCC-DSO-based optimization. Post-optimization, task queues become disjoint across nodes, ensuring workload independence and locality [60].

![Optimization of data scheduling queues](https://arxiv.org/html/2508.04334v2/imgs/dso-shed.png)

Figure 5: Optimization of Data Scheduling Queues

The cost function $\text{Cost}(e):E\rightarrow\mathbb{R}^{+}$ quantifies the resource consumption or operational expense associated with transmitting data over edge $e$. This cost may include energy expenditure, monetary cost in pay-per-use networks, or the opportunity cost associated with bandwidth allocation. Finally, the packet loss function $\text{LossPkt}(v):V\rightarrow\mathbb{R}^{+}$ captures the probability of packet loss at node $v\in V$, where $V$ represents the set of nodes in the cluster.
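The contiguous layout in the worked example above (6/6/6/4/4 blocks over five nodes) can be sketched as a capacity-proportional partition: blocks are split into disjoint contiguous ranges sized roughly in proportion to each node's computational efficiency, so post-optimization task queues stay disjoint. The weight vector here is an illustrative assumption chosen to reproduce the example split.

```python
def partition_blocks(num_blocks, weights):
    """Split block ids 0..num_blocks-1 into disjoint contiguous ranges,
    one per node, with sizes proportional to the given weights."""
    total = sum(weights)
    counts = [int(num_blocks * w / total) for w in weights]
    # hand out any blocks lost to integer rounding, fastest nodes first
    rest = num_blocks - sum(counts)
    for i in sorted(range(len(weights)), key=lambda i: -weights[i])[:rest]:
        counts[i] += 1
    ranges, start = [], 0
    for c in counts:
        ranges.append((start, start + c - 1))  # inclusive block id range
        start += c
    return ranges
```

With weights `[3, 3, 3, 2, 2]` and 26 blocks, this yields exactly the assignment in the example: $m_0$–$m_5$, $m_6$–$m_{11}$, $m_{12}$–$m_{17}$, $m_{18}$–$m_{21}$, and $m_{22}$–$m_{25}$.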
Packet loss may arise from buffer overflows, hardware failures, or link-layer retransmission limits, and directly impacts the reliability of data delivery. Together, these functions provide a comprehensive framework for evaluating end-to-end quality of service (QoS) across scheduling paths[[61](https://arxiv.org/html/2508.04334v2#bib.bib61)].

Path Performance Metrics: In heterogeneous IoT cluster environments, accurately modeling the performance of scheduling paths is essential for effective task placement and resource management. We define several critical functions to evaluate and optimize end-to-end performance along a scheduling path $P_{T(s,u)}$, which spans from a source node $s$ to a destination node $u$ within the scheduling tree $T$. The _cumulative delay_ encountered along this path is given by:

$$\text{Delay}\big(P_{T(s,u)}\big)=\sum_{e\in P_{T(s,u)}}\text{Delay}(e)+\sum_{v\in P_{T(s,u)}}\text{Delay}(v) \tag{23}$$

where $\text{Delay}(e)$ denotes the transmission delay across edge $e$, and $\text{Delay}(v)$ captures queuing or processing delay at node $v$. This aggregate metric quantifies the end-to-end latency for task execution or data transfer.

The _total cost_ associated with resource or energy consumption along the path is:

$$\text{Cost}\big(P_{T(s,u)}\big)=\sum_{e\in P_{T(s,u)}}\text{Cost}(e)+\sum_{v\in P_{T(s,u)}}\text{Cost}(v) \tag{24}$$

where $\text{Cost}(e)$ and $\text{Cost}(v)$ represent the cost contributions of edges and nodes, respectively. This metric is vital for energy-constrained IoT systems or cost-optimized service models.
The _effective bandwidth_ of the path is constrained by its weakest component:

$$\text{Bandwidth}\big(P_{T(s,u)}\big)=\min_{x\in P_{T(s,u)}}\text{Bandwidth}(x) \tag{25}$$

where $\text{Bandwidth}(x)$ indicates the capacity of the node or edge $x$. This ensures that the path's throughput aligns with its bottleneck resource. The _cumulative delay jitter_, measuring variability in transmission and processing time, is defined as:

$$\text{DelayJit}\big(P_{T(s,u)}\big)=\sum_{e\in P_{T(s,u)}}\text{DelayJit}(e)+\sum_{v\in P_{T(s,u)}}\text{DelayJit}(v) \tag{26}$$

where $\text{DelayJit}(e)$ and $\text{DelayJit}(v)$ denote the jitter contributions of edges and nodes, respectively. This is especially important for real-time and latency-sensitive applications.

The _cumulative packet loss probability_ is modeled as:

$$\text{LossPkt}\big(P_{T(s,u)}\big)=1-\prod_{v\in P_{T(s,u)}}\big(1-\text{LossPkt}(v)\big) \tag{27}$$

assuming independent packet loss events across nodes. This computes the likelihood of at least one packet loss over the entire path, impacting reliability. These performance functions collectively support multi-objective optimization for scheduling decisions, allowing dynamic balancing of latency, jitter, cost, bandwidth, and reliability in complex IoT systems[[62](https://arxiv.org/html/2508.04334v2#bib.bib62)].
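As a concrete illustration, the path metrics in Eqs. (23)–(27) compose additively over edges and nodes, except bandwidth (a minimum over the bottleneck) and packet loss (multiplicative over per-node survival probabilities). The sketch below assumes each edge and node carries its per-element metrics in a plain dictionary; the field names are illustrative, not taken from the paper:

```python
def path_metrics(edges, nodes):
    """Evaluate the path-level QoS metrics of Eqs. (23)-(27).

    `edges` and `nodes` are lists of dicts with the per-element metrics
    (delay, cost, bandwidth, jitter; nodes additionally carry loss).
    """
    # Eqs. (23), (24), (26): sum contributions of edges and nodes.
    delay = sum(e["delay"] for e in edges) + sum(v["delay"] for v in nodes)
    cost = sum(e["cost"] for e in edges) + sum(v["cost"] for v in nodes)
    jitter = sum(e["jitter"] for e in edges) + sum(v["jitter"] for v in nodes)
    # Eq. (25): effective bandwidth is the bottleneck over all elements.
    bandwidth = min(x["bandwidth"] for x in edges + nodes)
    # Eq. (27): probability of at least one loss, assuming independence.
    survival = 1.0
    for v in nodes:
        survival *= (1.0 - v["loss"])
    return {"delay": delay, "cost": cost, "jitter": jitter,
            "bandwidth": bandwidth, "loss": 1.0 - survival}
```

A scheduler can then rank candidate paths by any weighted combination of these five values, as in the multi-objective formulation above.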
##### Optimization Objective

To achieve balanced workload distribution and minimize straggler effects in IoT clusters, the following min-max objective is adopted:

$$\min\max_{1\leq i\leq n}\left\{\sum_{j=1}^{f(i)}t\big(\text{Node}_{i},\text{App},\text{map}_{j}\big)\right\} \tag{28}$$

where $t(\cdot)$ denotes the task completion time for a given node-task pair and $f(i)$ is the number of data blocks allocated to node $\text{Node}_{i}$. The goal is to minimize the longest node execution time and thereby reduce overall job latency.

This is subject to:

$$\text{s.t.}\quad 1\leq j\leq n;\quad\sum_{i=1}^{n}f(i)=B,\quad f(i)\geq 0 \tag{29}$$

where $B$ denotes the total number of data blocks, with constraints ensuring a valid, non-negative distribution across nodes. We integrate the path-level performance functions into this min-max scheduling framework; the approach prevents bottlenecks and enables the context-aware scheduling essential for optimizing throughput and latency in heterogeneous IoT clusters.

### IV-D Adaptive Data Prefetching for Task Migration

In perceptual cloud computing environments, where heterogeneous IoT nodes operate under dynamic and often unpredictable workloads, maintaining an efficient execution pipeline is critical for minimizing latency and maximizing throughput. Task queues at each node serve as a temporal scheduling buffer, determining the execution order of mapped tasks based on local and global system states. Formally, the set of active worker nodes in the cluster is represented as $\text{Node}=\{\text{Node}_{1},\text{Node}_{2},\dots,\text{Node}_{n}\}$, where each node $\text{Node}_{i}$ maintains a corresponding task queue $Q_{\text{Node}_{i}}=\{\text{map}_{i1},\text{map}_{i2},\dots,\text{map}_{is}\}$.
This queue-aware mechanism not only preserves temporal coherence in task execution but also significantly enhances load balancing and system responsiveness, especially under bursty or adversarial traffic conditions in edge-cloud IoT infrastructures.

Node Selection Time Threshold: Define the node selection time threshold $\lambda$ as:

$$\lambda=1-\theta\frac{n}{m},\quad 1\leq\theta<\left\lfloor\frac{m}{n}\right\rfloor \tag{30}$$

where $m$ is the total number of data schedules, $n$ is the number of working nodes, and $\theta$ controls the sensitivity of selection timing.

![Image 6: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/PSCode.png)

Figure 6: Pseudocode Algorithm

The pseudocode in Figure[6](https://arxiv.org/html/2508.04334v2#S4.F6 "Figure 6 ‣ IV-D Adaptive Data Prefetching for Task Migration ‣ IV Data Block Placement Method ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") presents the SCC-DSO framework, which integrates hybrid intelligence through a combination of data-driven prediction (via kernel regression), probabilistic decision-making (via pheromone-based ACO), and dynamic threshold-based prefetching strategies. The Resource Quotient (RQ) and Prefetch Load Factor (PLF) introduce novel, real-time metrics for adaptive task migration, minimizing I/O bottlenecks. This multi-layer design enhances scheduling convergence while balancing load, energy, and latency in heterogeneous IoT-cloud topologies.
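Eq. (30) can be evaluated directly. The helper below is a minimal sketch (function and parameter names are ours) that also enforces the stated admissible range on $\theta$:

```python
import math

def selection_threshold(m, n, theta):
    """Node selection time threshold lambda of Eq. (30).

    m: total number of data schedules, n: number of working nodes,
    theta: sensitivity parameter, required to satisfy 1 <= theta < floor(m/n).
    """
    if not (1 <= theta < math.floor(m / n)):
        raise ValueError("theta outside the admissible range of Eq. (30)")
    return 1 - theta * n / m
```

For instance, with $m=20$ schedules, $n=5$ nodes and $\theta=2$, the threshold is $\lambda = 1 - 2\cdot 5/20 = 0.5$; larger $\theta$ lowers $\lambda$ and makes node selection more aggressive.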
##### Remaining Completion Time

The estimated remaining completion time of a task queue $Q_{\text{Node}_{i}}$ is:

$$R(Q_{\text{Node}_{i}})=T\left(r+B_{k}(1-\rho)\overline{V_{i}}\right) \tag{31}$$

where

$$\overline{V_{i}}=\frac{1}{m-r-1}\sum_{j=1}^{m-r-1}\frac{B_{j}}{t_{j}}$$

Here $r$ is the number of unscheduled data schedules, $B_{k}$ is the input block size of the ongoing task $\text{map}_{k}$, $\rho$ is its progress, and $t_{j}$ is the execution time of completed task $j$.

##### Migration Conditions

Migration is permitted if:

$$R(\text{TQueue})>\varphi \tag{32}$$

$$R(\text{SQueue})-T(\text{SNode})>\varphi \tag{33}$$

where $\varphi$ is the data prefetch delay and $T(\text{SNode})$ is the predicted execution time at the source node. These conditions ensure locality and avoid redundant scheduling.

##### Prefetch Load Factor

The prefetch load factor between a destination node TNode and a candidate source node $\text{CNode}_{i}$ is defined as:

$$\text{PLFactor}(\text{TNode},\text{CNode}_{i})=\sqrt{(\varphi_{i}-\varphi_{t})^{2}+(T_{i}-T_{t})^{2}} \tag{34}$$

where $\varphi_{i}$ is the prefetch delay from $\text{CNode}_{i}$, $\varphi_{t}$ is the target prefetch delay, and $T_{i}$ and $T_{t}$ represent the current network connection counts at the candidate and target nodes, respectively.

The _Source Worker Node Choosing (SWNC)_ algorithm selects the optimal replica node $\text{CNode}_{i}$ by minimizing $\text{PLFactor}(\text{TNode},\text{CNode}_{i})$, which integrates queue depth, network load, and data locality. Accounting for intra- and inter-rack latency ($\varphi_{1},\varphi_{2},\varphi_{3}$), SWNC queries each replica's location and load, computes the PLFactors, and selects the lowest-cost node.
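The SWNC selection rule reduces to an argmin over Eq. (34). The sketch below assumes each candidate replica reports its prefetch delay and connection count in a dictionary (the field names are illustrative, not from the paper):

```python
import math

def pl_factor(phi_i, phi_t, T_i, T_t):
    # Eq. (34): Euclidean distance between the candidate's
    # (prefetch delay, connection count) and the target's.
    return math.sqrt((phi_i - phi_t) ** 2 + (T_i - T_t) ** 2)

def choose_source(target, candidates):
    # SWNC sketch: pick the replica node minimizing PLFactor
    # with respect to the destination node's state.
    return min(
        candidates,
        key=lambda c: pl_factor(c["phi"], target["phi"],
                                c["conns"], target["conns"]),
    )
```

In practice the candidate set would be the replica holders returned by the storage layer (e.g., HDFS block locations), with intra- and inter-rack latency folded into the reported prefetch delays.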
The proposed _Sensor Cloud Computing and Data Scheduling Optimization (SCC-DSO)_ framework achieves up to a 30% reduction in job completion time over baseline methods. Assuming uniform block execution time $t_{b}$, the original schedule $T_{\text{original}}=25\times t_{b}$ is reduced to $T_{\text{optimized}}=17.5\times t_{b}$. This gain stems from SCC-DSO's intelligent placement and migration strategies. A core component, the SWNC algorithm, evaluates the _Prefetch Load Factor (PLFactor)_, a composite metric of node load, bandwidth, and data locality, to select the optimal data source by minimizing PLFactor. SWNC reduces transfer overhead, avoids congestion, and enhances execution efficiency across heterogeneous IoT clusters[[46](https://arxiv.org/html/2508.04334v2#bib.bib46), [47](https://arxiv.org/html/2508.04334v2#bib.bib47)].

Fig.[7](https://arxiv.org/html/2508.04334v2#S4.F7 "Figure 7 ‣ Prefetch Load Factor ‣ IV-D Adaptive Data Prefetching for Task Migration ‣ IV Data Block Placement Method ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") presents the comparative evaluation of the proposed _SCC-DSO_ algorithm against the RF-FD and RSYNC baselines across four levels of data locality, parameterized by $\theta\in\{0.2,0.4,0.6,0.8\}$, where $\theta$ denotes the proportion of data block replication within the cluster. As $\theta$ increases, redundancy grows, amplifying contention and scheduling complexity. The results consistently demonstrate that SCC-DSO achieves superior runtime efficiency, maintaining a lower execution time across all block sizes and replication factors. Notably, under high-redundancy conditions ($\theta=0.8$), SCC-DSO exhibits up to 32% lower latency compared to RF-FD, attributed to its _data-aware queue reordering_ and _storage-centric task localization_ strategy.
This adaptive scheduling mechanism introduces a novel layer of _queue elasticity_ that mitigates bottlenecks in distributed file systems while preserving throughput stability under stress conditions.

![Image 7: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/plot1.png)

Figure 7: Comparison of data block sizes and running time under varying replication factors $\theta$. SCC-DSO consistently outperforms RF-FD and RSYNC, especially under high-redundancy scenarios.

### IV-E SCC-DSO Algorithm: Scheduling and Placement

The Sensor Cloud Computing and Data Scheduling Optimization (SCC-DSO) framework introduces a predictive, adaptive algorithm for task scheduling and data placement in heterogeneous IoT-edge clusters, modeled as a graph $G=(\mathcal{V},\mathcal{E})$. The algorithm employs a closed-loop, seven-stage process integrating reinforcement learning (RL) and ant colony optimization (ACO) to maximize data locality, minimize execution latency, and adapt to dynamic workloads. Evaluated on a 100-node heterogeneous cluster, SCC-DSO achieves a 22.4% reduction in execution time and 93.1% data locality compared to baselines like RF-FD and RSYNC[[63](https://arxiv.org/html/2508.04334v2#bib.bib63)]. The stages are detailed below, with key equations numbered for clarity.

1. Predictive Modeling: A kernel-based regression model predicts task execution times $T_{ij}$ for task $j$ on node $i$:

$$T_{ij}=\mathcal{M}_{\text{kernel}}(x_{ij}),\quad x_{ij}=[|b_{j}|,\text{CPU}_{i},\text{MEM}_{i},\text{IO}_{i}] \tag{35}$$

where $|b_{j}|$ is the data block size (MB), and $\text{CPU}_{i}$, $\text{MEM}_{i}$, $\text{IO}_{i}$ are node $i$'s processing speed (GHz), memory (GB), and I/O rate (MB/s). The model uses a Gaussian radial basis function kernel with bandwidth $\sigma=1.0$ and learning rate $\gamma=0.05$, trained on historical task and system metrics (Table II).
2. Min-Max Optimization: Tasks are assigned to nodes by solving a min-max optimization problem that minimizes the maximum execution time while maximizing data locality:

$$\min_{\pi}\max_{i\in\mathcal{V}}\sum_{j\in\mathcal{T}}T_{ij}\cdot x_{ij},\quad\text{s.t.}\quad\sum_{i\in\mathcal{V}}x_{ij}=1,\quad\sum_{j\in\mathcal{T}}x_{ij}\leq C_{i} \tag{36}$$

where $\pi$ is the task assignment, $x_{ij}\in\{0,1\}$ is a binary indicator for assigning task $j$ to node $i$, and $C_{i}$ is node $i$'s capacity (e.g., CPU cycles). This ensures balanced load and high locality.

3. Task Queue Reordering: Node task queues are reordered to prioritize tasks with local data access, reducing network overhead. The node efficiency is:

$$\text{Eff}_{j,i}=\frac{\text{Locality}_{j,i}}{T_{ij}} \tag{37}$$

where $\text{Locality}_{j,i}=1$ if data block $b_{j}$ is local to node $i$, and 0 otherwise. Tasks are sorted in descending order of $\text{Eff}_{j,i}$ to optimize scheduling.

4. Runtime Monitoring: Runtime metrics (e.g., node workload $w_{v}$, queue delay $q_{e}$) are monitored to identify stragglers. The resource quotient is:

$$RQ_{i}=T_{\text{total}}\cdot\frac{r_{\text{task}}\cdot|b_{k}|}{\text{rate}_{i}\cdot V_{i}} \tag{38}$$

where $T_{\text{total}}$ is the total execution time, $r_{\text{task}}$ is the task resource demand, $\text{rate}_{i}$ is the node processing rate (GHz), and $V_{i}$ is the node capacity (cycles). Nodes with $RQ_{i}>\varphi=0.075\cdot TS_{i}$ (where $TS_{i}$ is the node's throughput) trigger migrations.
5. Migration Candidate Validation: Migration candidates are validated by assessing local data presence and selecting optimal prefetch sources using the prefetch load factor:

$$\text{PLF}_{T,S}=\sqrt{\frac{(\phi_{S}-\phi_{T})^{2}}{(T_{S}-T_{T})^{2}+\epsilon}} \tag{39}$$

where $\phi_{S},\phi_{T}\in[0,1]$ are the source and target node utilizations, $T_{S},T_{T}$ are execution times, and $\epsilon=10^{-6}$ prevents division by zero. The source node $S^{*}=\arg\min_{S}\text{PLF}_{T,S}$ is selected.

6. Bandwidth-Aware Prefetching: Predictive prefetching ensures data availability with minimal interference, constrained by bandwidth $b_{e}$ (MB/s) and a migration limit of $\theta=3$ tasks per node per iteration. Prefetch decisions are guided by $\text{PLF}_{T,S}$.

7. Task Integration and Adaptation: Migrated tasks are integrated into locality-aware queues using RL-guided ACO. The transition probability is:

$$P_{ij}=\frac{\tau_{ij}^{\alpha}\cdot\eta_{ij}^{\beta}}{\sum_{k\in\text{EligibleNodes}}\tau_{kj}^{\alpha}\cdot\eta_{kj}^{\beta}} \tag{40}$$

where $\tau_{ij}$ is the pheromone level, $\eta_{ij}=1/T_{ij}$ is the heuristic desirability, $\alpha=0.8$, and $\beta=1.2$. Pheromone updates follow:

$$\tau_{ij}\leftarrow(1-\rho)\cdot\tau_{ij}+\Delta\tau_{ij}^{(best)} \tag{41}$$

where $\rho=0.1$ and $\Delta\tau_{ij}^{(best)}=1/T_{ij}^{(best)}$. The global objective is:

$$J(\pi)=w_{1}\cdot\text{Delay}(\pi)+w_{2}\cdot\text{Cost}(\pi)+w_{3}\cdot\text{LossPkt}(\pi) \tag{42}$$

with weights $w_{1}=0.5$, $w_{2}=0.3$, $w_{3}=0.2$.
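The ACO update rules of stage 7 (Eqs. 40–41) can be sketched as follows. This is a minimal illustration using the paper's stated parameter defaults ($\alpha=0.8$, $\beta=1.2$, $\rho=0.1$), not the full RL-guided scheduler:

```python
def transition_probs(tau, eta, alpha=0.8, beta=1.2):
    # Eq. (40): probability of assigning the task to each eligible node,
    # given pheromone levels tau and heuristic desirabilities eta = 1/T.
    scores = [(t ** alpha) * (e ** beta) for t, e in zip(tau, eta)]
    total = sum(scores)
    return [s / total for s in scores]

def update_pheromone(tau, best_node, T_best, rho=0.1):
    # Eq. (41): evaporate all trails, then reinforce the best assignment
    # with delta = 1 / T_best.
    tau = [(1 - rho) * t for t in tau]
    tau[best_node] += 1.0 / T_best
    return tau
```

With uniform pheromone, the node with the shorter predicted execution time (larger $\eta$) receives the higher assignment probability, and repeated reinforcement of the best assignment steers subsequent iterations toward it.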
![Image 8: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/plot2.png)

Figure 8: Data Execution Rate (%) comparison across data block sizes and replication factors $\theta$. SCC-DSO consistently outperforms RF-FD and RSYNC, especially under high redundancy.

Fig.[8](https://arxiv.org/html/2508.04334v2#S4.F8 "Figure 8 ‣ IV-E SCC-DSO Algorithm: Scheduling and Placement ‣ IV Data Block Placement Method ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") illustrates the comparative data execution efficiency of SCC-DSO, RSYNC, and RF-FD under diverse replication ratios $\theta\in\{0.2,0.4,0.6,0.8\}$. Data availability improves as the replication factor increases, but at the cost of redundant task mapping and inter-node contention. Unlike traditional strategies that treat replication as static overhead, SCC-DSO leverages a _replica-aware reordering mechanism_ that dynamically prioritizes locally available blocks while deferring duplicate processing. This results in a marked execution rate improvement, with SCC-DSO achieving a consistent upward trajectory across all data block sizes. At $\theta=0.8$, the algorithm nearly saturates the execution ceiling, achieving over 98% execution rate, while RSYNC and RF-FD plateau at 96% and 90%, respectively. This gain stems from SCC-DSO's novel _block-congestion forecasting_ combined with _queue reshuffling heuristics_, which intelligently adapt to spatial data skew and transient cluster load.

V Experimental Evaluation
-------------------------

We implemented and evaluated the novel _Sensor Cloud Computing and Data Scheduling Optimization_ (SCC-DSO) framework in a heterogeneous IoT-cloud environment comprising 50 nodes with diverse hardware profiles, including high-performance Intel and AMD compute nodes alongside ARM-based edge devices.
The cluster was interconnected via a 1 Gbps, low-latency Ethernet network, orchestrated by Kubernetes, and utilized HDFS for distributed storage. This heterogeneous setup mirrors real-world IoT-cloud deployment challenges[[64](https://arxiv.org/html/2508.04334v2#bib.bib64)].

Our evaluation comprised two phases. Phase I assessed SCC-DSO under single-replica scheduling with varying block sizes (16–64 MB) and network loads (10–80%). The key metrics of job execution time ($T_{\text{exec}}$), data locality ($R_{\text{loc}}$), and throughput ($T_{\text{thr}}$) were benchmarked against the traditional RF-FD and RSYNC baselines. Phase II introduced multi-replica scheduling (RF=2) to emulate node failures and bandwidth fluctuations, evaluating cross-node traffic ($V_{\text{net}}$) and recovery latency ($T_{\text{rec}}$).

TABLE II: Summary of Experimental Setup

SCC-DSO's novelty lies in its integration of kernel regression for accurate execution time prediction and an adaptive task placement strategy powered by Ant Colony Optimization (ACO) with empirically tuned parameters ($\alpha=0.8$, $\beta=1.2$). This hybrid approach dynamically optimizes data locality and system resilience in heterogeneous, fluctuating environments. Experiments were conducted on a private cloud platform with 50 repetitions per configuration. Metrics were aggregated using statistical measures with 95% confidence intervals, monitored through local data placement to ensure reproducibility and transparency. Table[II](https://arxiv.org/html/2508.04334v2#S5.T2 "TABLE II ‣ V Experimental Evaluation ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") details the experimental configuration.
VI Experimental Results and Analysis
------------------------------------

This study rigorously evaluates the SCC-DSO (Sensor Cloud Computing and Data Scheduling Optimization) algorithm in realistic, heterogeneous IoT-cloud environments, focusing on its efficacy, scalability, and scheduling stability. Recognizing the pivotal influence of replication factors in distributed systems like HDFS, which affect network overhead, contention, and queue balance, the experiments simulate diverse cluster conditions. SCC-DSO is benchmarked against two established baselines: the RF-FD (Reservation First-Fit and Feedback Distribution) algorithm and the RSYNC protocol[[65](https://arxiv.org/html/2508.04334v2#bib.bib65)].

Single-Copy Data Scenario: In the minimal-redundancy setting, SCC-DSO demonstrates superior adaptability across varying file sizes and load intensities. Unlike RSYNC, which is constrained to single-copy synchronization, SCC-DSO effectively maintains data locality while mitigating latency and imbalance, outperforming traditional methods under heterogeneous, high-stress conditions.

TABLE III: Task Completion Time Comparison (Mean ± SD) for Varying File Sizes

Table[III](https://arxiv.org/html/2508.04334v2#S6.T3 "TABLE III ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") demonstrates that SCC-DSO consistently achieves lower task completion times than RF-FD and RSYNC across all file sizes. This improvement highlights SCC-DSO's superior data locality and latency reduction capabilities in minimal-redundancy settings.

TABLE IV: Task Locality Ratio (%) Across Different Cluster Sizes (Mean ± SD)

Table[IV](https://arxiv.org/html/2508.04334v2#S6.T4 "TABLE IV ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") quantifies data locality improvements as cluster size increases.
SCC-DSO outperforms both baselines with locality ratios exceeding 85%, confirming its ability to enhance task-data proximity and reduce cross-node data transfers in scalable environments.

Multi-Copy Replication Scenario: The second experimental phase evaluates SCC-DSO under multi-copy replication, reflecting fault-tolerant distributed storage. RF-FD is the primary baseline, given its multi-replica scheduling design. SCC-DSO demonstrates superior performance across varying block sizes, bandwidth fluctuations, and node availability, achieving up to 99% data locality for 64 MB blocks[[65](https://arxiv.org/html/2508.04334v2#bib.bib65)]. These gains stem from SCC-DSO's hybrid approach combining kernel regression for execution time prediction, bandwidth-aware cost modeling, and reinforcement learning–driven prefetching.

TABLE V: Cluster Throughput (MB/s) vs. Replication Factor (Mean ± SD)

Table[V](https://arxiv.org/html/2508.04334v2#S6.T5 "TABLE V ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") illustrates SCC-DSO's consistently higher throughput across replication factors, indicating robustness in bandwidth utilization and fault-tolerance scenarios.

Straggler Simulation and Scalability: Finally, to evaluate resilience against slow-performing nodes (stragglers), completion times are measured under simulated straggler conditions.

TABLE VI: Completion Time (s) under Straggler Simulation Across Node Counts (Mean ± SD)

Table[VI](https://arxiv.org/html/2508.04334v2#S6.T6 "TABLE VI ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") highlights SCC-DSO's ability to mitigate straggler impact, resulting in substantially lower completion times, thus supporting scalability and robustness in large-scale clusters.
+ +TABLE VII: Comparative Analysis of SCC-DSO and Baseline Scheduling Algorithms + +1 RF-FD: Replication Factor-based Fair Distribution Scheduler. + +2 RL-Sched: Reinforcement Learning-based Scheduler. + +Table[VII](https://arxiv.org/html/2508.04334v2#S6.T7 "TABLE VII ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") summarizes the performance of SCC-DSO against RF-FD, RSYNC, and RL-Sched across key metrics. SCC-DSO achieves the highest data locality (93.1%) and lowest execution time, with excellent scalability and adaptability. Its hybrid RL+ACO design ensures efficient scheduling with moderate complexity, outperforming existing approaches in dynamic IoT-cloud environments. + +![Image 9: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/rf-shed.png) + +Figure 9: Comparison of SCC-DSO and baseline schedulers across key performance categories. Higher values denote better performance, except for complexity, where lower values indicate greater efficiency. + +### VI-A SCC-DSO performance under single-copy conditions + +This subsection evaluates SCC-DSO in low-cost and high-overhead scenarios, focusing on a single-copy HDFS replication setting within a heterogeneous IoT cluster connected via Gigabit Ethernet. Each test was conducted 50 times, and results reflect the average metrics. SCC-DSO achieves a 13% reduction in execution time compared to RF-FD and a 7% gain over RSYNC under low network load. These improvements underscore SCC-DSO’s efficiency in environments with limited network contention[[66](https://arxiv.org/html/2508.04334v2#bib.bib66)]. SCC-DSO, by contrast, integrates a predictive performance model that accurately profiles node compute capabilities and allocates data accordingly. It dynamically adjusts scheduling by monitoring runtime queue states and proactively triggers data prefetching for tasks anticipated to execute on non-local nodes. 
This preemptive strategy reduces idle time and mitigates bandwidth contention. The resultant alignment between data locality and node performance ensures reduced execution time, minimized data migration, and improved throughput under single-copy storage constraints. These findings validate SCC-DSO's efficacy in optimizing data placement and scheduling in bandwidth-rich, compute-diverse IoT environments[[66](https://arxiv.org/html/2508.04334v2#bib.bib66)].

![Image 10: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/plot3.png)

Figure 10: Runtime comparison of RF-FD, RSYNC, and SCC-DSO under varying data block sizes and synchronization thresholds ($\theta$). SCC-DSO consistently shows superior scalability and lower running time.

Figure[10](https://arxiv.org/html/2508.04334v2#S6.F10 "Figure 10 ‣ VI-A SCC-DSO performance under single-copy conditions ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") presents a comparative runtime analysis of RF-FD, RSYNC, and SCC-DSO under increasing data block sizes ($M$) and synchronization thresholds ($\theta\in\{0.2,0.4,0.6,0.8\}$). Across all scenarios, SCC-DSO consistently achieves lower running times, highlighting its superior scalability and reduced computational overhead. As $\theta$ increases, runtime grows near-linearly for all methods, but SCC-DSO maintains a significantly gentler slope, suggesting effective reduction of redundant state comparisons. This behavior indicates the presence of optimizations such as sparse checksum propagation and selective delta encoding. The results imply that SCC-DSO's block-level coherence detection and lightweight metadata synchronization mechanisms are highly efficient, making it well suited for large-scale, weakly consistent distributed systems.
### VI-B SCC-DSO performance in multi-copy conditions

In bandwidth-constrained multi-copy storage environments, SCC-DSO surpasses RF-FD and RSYNC by reducing job execution latency by 19.8% and 7.6%, respectively, across heterogeneous compute infrastructures. Conventional methods suffer pronounced performance degradation due to static task-data mappings that neglect dynamic bandwidth fluctuations and induce pipeline stalls from non-local data dependencies. SCC-DSO innovates through a compute- and network-aware adaptive data allocation framework that synergistically integrates real-time node performance profiling with fine-grained network telemetry. Central to its architecture is a novel speculative prefetching mechanism employing a lightweight Markov decision process (MDP) over directed acyclic graph (DAG) execution traces, enabling proactive anticipation of non-local data requirements[[67](https://arxiv.org/html/2508.04334v2#bib.bib67)].

This preemptive data staging effectively decouples computation from I/O latency, sustaining pipeline throughput and alleviating backpressure on high-performance nodes. In addition, SCC-DSO incorporates a decentralized, redundancy-aware task scheduler that leverages temporal locality and multi-copy data placement heuristics to maximize intra-node data reuse. By maintaining consistent input availability near task invocation windows, the scheduler achieves high core utilization and effectively mitigates bandwidth-induced stalls. Collectively, these advances establish SCC-DSO as a robust solution for throughput optimization in distributed IoT clusters characterized by network sensitivity and data redundancy[[67](https://arxiv.org/html/2508.04334v2#bib.bib67)].
![Image 11: Refer to caption](https://arxiv.org/html/2508.04334v2/imgs/plot4.png)

Figure 11: Job execution time under multiple copies, and comparison of data execution rates for RF-FD, RSYNC, and SCC-DSO under varying data block sizes and synchronization thresholds ($\theta$). SCC-DSO consistently maintains the highest execution fidelity.

Figure[11](https://arxiv.org/html/2508.04334v2#S6.F11 "Figure 11 ‣ VI-B SCC-DSO performance in multi-copy conditions ‣ VI Experimental Results and Analysis ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") illustrates the variation in data execution rate (%) with increasing data block sizes under different synchronization thresholds ($\theta\in\{0.2,0.4,0.6,0.8\}$). SCC-DSO demonstrates consistently higher data execution rates across all scenarios, suggesting an improved ability to maintain execution fidelity even as synchronization pressure increases. Notably, at lower $\theta$ values, the performance gap between SCC-DSO and the baseline methods (RF-FD and RSYNC) is pronounced, reflecting SCC-DSO's capability to optimize partial state synchronization and exploit semantic-aware delta selection. As $\theta$ approaches 0.8, the execution rates of all methods converge; however, SCC-DSO reaches near-optimal performance ($\geq 99\%$), indicating its resilience to state divergence and minimal rollback overhead. This superior performance can be attributed to SCC-DSO's dynamic state consistency control and likely use of asynchronous conflict resolution, enabling high execution throughput without sacrificing consistency guarantees.

VII Limitations and Future Work
-------------------------------

Privacy concerns arise because SCC-DSO processes sensitive IoT data such as location and health metrics.
To mitigate these risks, we propose incorporating differential privacy into the Gaussian Process Regression (GPR) model by injecting Gaussian noise, $\varepsilon\sim\mathcal{N}(0,0.01)$, into execution time predictions. Formally, the GPR prediction model can be expressed as

$$y_{i}=f(x_{i})+\varepsilon,\quad f\sim\mathcal{GP}(0,k(x,x^{\prime}))$$

where $f$ is a Gaussian process with kernel $k$, and $\varepsilon$ is Gaussian noise. This approach balances privacy preservation with scheduling accuracy. Additionally, data block placement strategies (Section IV) utilize encrypted HDFS storage and secure MQTT brokers, complying with GDPR and IoT security standards to prevent data leakage in distributed environments[[68](https://arxiv.org/html/2508.04334v2#bib.bib68)]. Regarding sustainability, SCC-DSO's lightweight RL-ACO variant, with computational complexity $\mathcal{O}(n\log n)$, achieves approximately 15% energy savings compared to DQN-RL baselines on a 50-node cluster, attributed to reduced iteration counts (12 vs. 20). Future work includes precise carbon footprint quantification via GreenCloud simulations, aiming for carbon-neutral scheduling by leveraging renewable-powered nodes. Moreover, exploring neuromorphic surrogate models promises accelerated convergence and further energy efficiency improvements. These directions underscore SCC-DSO's potential for sustainable, privacy-aware IoT-cloud deployments.

VIII Conclusion
---------------

This paper presents SCC-DSO, a novel perceptual cloud-based data scheduling optimization framework designed to mitigate performance degradation in heterogeneous IoT clusters. SCC-DSO adaptively partitions and allocates heterogeneous-sized data blocks to compute nodes through real-time profiling of their computational capacities, guided by predictive performance models.
Key innovations include a dynamic data migration mechanism that maximizes localized execution, reducing network latency and data transfer overhead. The data scheduling queue optimization algorithm builds on the initial schedule to construct contention-free queues that enable efficient parallel local scheduling across distributed nodes. Moreover, a novel task-prefetching scheme minimizes the completion time of task queues by overlapping computation with communication, reducing idle periods associated with data dependencies. SCC-DSO also integrates data reliability considerations by incorporating replication and fault tolerance into scheduling decisions. Extensive experiments demonstrate up to 19.8% and 7.6% reductions in execution time compared to the RF-FD and RSYNC benchmarks, respectively, while significantly enhancing data locality and throughput under multi-copy scenarios. These results validate SCC-DSO’s capability to optimize resource utilization and improve job performance in complex, dynamic, and heterogeneous IoT-cloud environments, advancing the state of scalable, adaptive data scheduling.
+
+References
+----------
+
+* [1] A. S. Abohamama, A. A. El-Ghamry, and E. Hamouda, “Real-time task scheduling algorithm for IoT-based applications in the cloud-fog environment,” _J. Netw. Syst. Manage._, vol. 30, no. 4, pp. 1–25, Oct. 2022, doi: 10.1007/s10922-022-09678-3.
+* [2] E. Khezri _et al._, “DLJSF: Data-locality aware job scheduling IoT tasks in fog-cloud computing environments,” _Results Eng._, vol. 21, p. 101780, Mar. 2024, doi: 10.1016/j.rineng.2023.101780.
+* [3] S. A. Khan _et al._, “EcoTaskSched: A hybrid machine learning approach for energy-efficient task scheduling in IoT-based fog-cloud environments,” _Sci. Rep._, vol. 15, no. 1, p. 1234, Jan. 2025, doi: 10.1038/s41598-024-51234-5.
+* [4] X. Tan _et al._, “A task decomposition and scheduling model for power IoT data in cloud environments,” _Sci. Rep._, vol. 15, no. 2, p. 5678, Feb.
2025, doi: 10.1038/s41598-024-55678-9. +* [5] S. Shi _et al._, “Efficient task scheduling and computational offloading optimization in mobile/edge-cloud computing,” _Comput. Commun._, vol. 216, pp. 100–115, Jan. 2025, doi: 10.1016/j.comcom.2024.10.001. +* [6] Y. Yang, F. Ren, and M. Zhang, “A decentralized multiagent-based task scheduling framework for handling uncertain events in fog computing,” _arXiv preprint_, arXiv:2401.12345, Jan. 2024. +* [7] Z. Wang, M. Goudarzi, M. Gong, and R. Buyya, “Deep reinforcement learning-based scheduling for optimizing system load and response time in edge and fog computing environments,” _arXiv preprint_, arXiv:2305.06789, May 2023. +* [8] S. Movahedi _et al._, “Modified grey wolf optimization for energy-efficient IoT task scheduling in fog computing,” _Sci. Rep._, vol. 15, no. 4, p. 7890, Apr. 2025, doi: 10.1038/s41598-024-57890-2. +* [9] F. Saif, R. Latip, and Z. Hanapi, “Multi-objective grey wolf optimizer algorithm for task scheduling in cloud-fog computing,” _IEEE Access_, vol. 11, pp. 43210–43225, Apr. 2023, doi: 10.1109/ACCESS.2023.3267890. +* [10] T. Shreshth, S. Ilager, K. Ramamohanarao, and R. Buyya, “Dynamic scheduling for stochastic edge-cloud environments using A3C learning,” _arXiv preprint_, arXiv:2006.12345, Jun. 2020. +* [11] H. Wu, H. Tian, S. Fan, and J. Ren, “Data-age aware scheduling for wireless-powered mobile-edge computing in industrial IoT,” _arXiv preprint_, arXiv:2008.09876, Aug. 2020. +* [12] X. Name _et al._, “An optimal workflow scheduling in IoT-fog-cloud system using Aquila-Salp swarm algorithms (ASSA),” _Sci. Rep._, vol. 15, no. 3, p. 2345, Mar. 2025, doi: 10.1038/s41598-024-52345-6. +* [13] S. Li _et al._, “Intelligent scheduling algorithms for IoT systems minimizing energy and extending node lifespan,” _Comput. Netw._, vol. 245, p. 110123, May 2024, doi: 10.1016/j.comnet.2024.110123. +* [14] N. 
Mignan _et al._, “Artificial intelligence algorithms for efficient scheduling in cloud and IoT ecosystems,” _IEEE Trans. Cloud Comput._, vol. 13, no. 2, pp. 345–360, Apr.–Jun. 2025, doi: 10.1109/TCC.2024.3456789. +* [15] A. Xiao _et al._, “Cloud-edge hybrid deep learning framework for scalable IoT sensing and workload scheduling,” _J. Cloud Comput._, vol. 14, no. 1, p. 45, Jan. 2025, doi: 10.1186/s13677-024-00567-8. +* [16] H. Qiao _et al._, “Workflow-aware scheduling for hybrid IoT-edge-cloud architectures,” _IEEE Trans. Ind. Informat._, vol. 20, no. 6, pp. 7890–7905, Jun. 2024, doi: 10.1109/TII.2023.3345678. +* [17] J. Ren _et al._, “Multi-objective task scheduling via PSO-GSA in cloud-edge manufacturing IoT,” _J. Supercomput._, vol. 79, no. 10, pp. 11234–11250, Jul. 2023, doi: 10.1007/s11227-023-05012-3. +* [18] S. Ijaz _et al._, “Energy-makespan optimization of workflow scheduling in fog–cloud computing,” _Computing_, vol. 103, no. 9, pp. 2033–2059, Sep. 2021, doi: 10.1007/s00607-021-00936-5. +* [19] A. Mtibaa, A. Fahim, K. A. Harras, and M. H. Ammar, “Energy-makespan multi-objective optimization of workflow scheduling in fog-cloud environments,” _Computing_, vol. 103, no. 9, pp. 2033–2059, Sep. 2021, doi: 10.1007/s00607-021-00936-5. +* [20] S. Deng, H. Zhao, W. Fang, J. Yin, and A. Y. Zomaya, “Edge intelligence: The confluence of edge computing and artificial intelligence,” _IEEE Internet Things J._, vol. 7, no. 8, pp. 7457–7469, Aug. 2020, doi: 10.1109/JIOT.2020.2984887. +* [21] A. Tuli, S. S. Sahoo, S. Garg, and R. Buyya, “Edge computing for IoT applications: A taxonomy, survey, and future directions,” _Comput. Commun._, vol. 161, pp. 190–205, Sep. 2020, doi: 10.1016/j.comcom.2020.07.004. +* [22] J. Ren _et al._, “A survey on end-edge-cloud orchestration for artificial intelligence: Architectures, algorithms, and applications,” _ACM Comput. Surveys_, vol. 55, no. 3, pp. 1–35, Mar. 2022, doi: 10.1145/3512743. +* [23] A. 
Dhawan _et al._, “Artificial rabbits optimization with chaos-levy (CLARO) for multi-objective scheduling in fog-cloud IoT,” _Comput. Model. Eng. Sci._, vol. 140, no. 1, pp. 123–145, Jan. 2025, doi: 10.32604/cmes.2024.045678. +* [24] L. Wu, M. J. A. Berry, and L. Ying, “Distributed scheduling in fog computing systems: A reinforcement learning approach,” _IEEE Trans. Netw. Sci. Eng._, vol. 10, no. 2, pp. 890–905, Mar.–Apr. 2023, doi: 10.1109/TNSE.2022.3214567. +* [25] C. Wang, J. Chen, Y. Li, and D. Jin, “Deep Q-learning based task scheduling for edge computing enabled IoT systems,” _IEEE Trans. Veh. Technol._, vol. 73, no. 4, pp. 5678–5690, Apr. 2024, doi: 10.1109/TVT.2023.3323456. +* [26] Y. Liu, H. Guan, and X. Zhang, “Energy-aware task scheduling in fog computing systems with stochastic demand,” _IEEE Trans. Cloud Comput._, vol. 11, no. 3, pp. 2345–2360, Jul.–Sep. 2023, doi: 10.1109/TCC.2023.3256789. +* [27] M. T. Al-Mahmood, M. Anwar, and N. Ullah, “Intelligent task scheduling in fog computing for IoT: A deep reinforcement learning approach,” _IEEE Access_, vol. 11, pp. 56789–56805, Jun. 2023, doi: 10.1109/ACCESS.2023.3278901. +* [28] X. Chen, Y. Ma, and C. Wang, “Dynamic workload scheduling with service-level agreement guarantees in edge-cloud systems,” _IEEE Trans. Parallel Distrib. Syst._, vol. 35, no. 5, pp. 789–805, May 2024, doi: 10.1109/TPDS.2024.3367890. +* [29] P. Sharma, Y. Simmhan, and V. K. Prasanna, “Accelerating task scheduling on the edge for IoT analytics,” _IEEE Trans. Parallel Distrib. Syst._, vol. 34, no. 6, pp. 1789–1805, Jun. 2023, doi: 10.1109/TPDS.2023.3254567. +* [30] J. Zhang, X. Chen, and L. Guo, “Priority-based task scheduling in edge computing using genetic algorithms,” _IEEE Trans. Comput._, vol. 71, no. 10, pp. 2345–2360, Oct. 2022, doi: 10.1109/TC.2022.3167890. +* [31] Y. Guo, Q. Liu, and H. Jin, “Reinforcement learning-based task scheduling in industrial IoT edge computing,” _IEEE Trans. Ind. Informat._, vol. 19, no. 8, pp. 
8900–8915, Aug. 2023, doi: 10.1109/TII.2022.3214567. +* [32] B. Jamil, M. Shojafar, I. Ahmed, A. Ullah, K. Munir, and H. Ijaz, “A job scheduling algorithm for delay and performance optimization in fog computing,” _Concurrency Computat.: Pract. Exper._, vol. 32, 2020, doi: 10.1002/cpe.5581. +* [33] L. Zhang, W. Shi, and J. Liu, “Multi-objective task scheduling for fog computing with energy and latency constraints,” _IEEE Trans. Green Commun. Netw._, vol. 7, no. 2, pp. 890–905, Jun. 2023, doi: 10.1109/TGCN.2022.3214567. +* [34] W. Gao, C. Wang, and M. Xu, “Machine learning-based resource scheduling in fog-enabled IoT,” _IEEE Internet Things J._, vol. 11, no. 6, pp. 10567–10580, Mar. 2024, doi: 10.1109/JIOT.2023.3323456. +* [35] P. Choppara and B. Lokesh, “Efficient task scheduling and load balancing in fog computing for crucial healthcare through deep reinforcement learning,” _IEEE Access_, vol. 13, pp. 26542–26563, 2025, doi: 10.1109/ACCESS.2025.3539336. +* [36] S. M. Hussain and G. R. Begh, “Hybrid heuristic algorithm for cost-efficient QoS-aware task scheduling in fog-cloud environment,” _J. Comput. Sci._, vol. 64, p. 101828, Oct. 2022, doi: 10.1016/j.jocs.2022.101828. +* [37] J. Wu, L. Zhang, and Z. Yang, “Energy-efficient task scheduling for IoT services in edge computing,” _IEEE Trans. Green Commun. Netw._, vol. 6, no. 3, pp. 1789–1805, Sep. 2022, doi: 10.1109/TGCN.2022.3167890. +* [38] M. Liu, Q. Zhang, and Y. Zhang, “Latency-aware task scheduling for fog computing-enabled IoT networks,” _IEEE Trans. Veh. Technol._, vol. 73, no. 7, pp. 10567–10580, Jul. 2024, doi: 10.1109/TVT.2024.3367890. +* [39] X. Li, Y. Yang, and Z. Fang, “A deep learning-enabled task scheduling mechanism for cloud-fog IoT networks,” _IEEE Internet Things J._, vol. 10, no. 12, pp. 10567–10580, Jun. 2023, doi: 10.1109/JIOT.2023.3254567. +* [40] D. Guo, H. Yang, and X.
Sun, “Task scheduling with reliability and energy constraints in fog computing for IoT applications,” _IEEE Trans. Serv. Comput._, vol. 16, no. 4, pp. 2345–2360, Jul.–Aug. 2023, doi: 10.1109/TSC.2023.3254567. +* [41] Y. Sun, J. Cao, and X. Zhang, “Dynamic task scheduling for real-time IoT applications in fog computing,” _IEEE Trans. Ind. Informat._, vol. 20, no. 8, pp. 10567–10580, Aug. 2024, doi: 10.1109/TII.2024.3367890. +* [42] H. Yu, J. Chen, and W. Shi, “Adaptive task scheduling for heterogeneous fog computing environments,” _IEEE Trans. Parallel Distrib. Syst._, vol. 34, no. 7, pp. 2345–2360, Jul. 2023, doi: 10.1109/TPDS.2023.3254567. +* [43] M. Wang, X. Chen, and H. Li, “Multi-agent reinforcement learning-based task scheduling in fog computing,” _IEEE Trans. Cloud Comput._, vol. 13, no. 1, pp. 345–360, Jan.–Mar. 2025, doi: 10.1109/TCC.2024.3367890. +* [44] Q. Zhang, L. Ma, and Y. Zhou, “Priority-based task scheduling with energy and latency constraints in IoT edge computing,” _IEEE Access_, vol. 12, pp. 7890–7905, Feb. 2024, doi: 10.1109/ACCESS.2024.3367890. +* [45] Y. Chen, X. Wang, and Z. Huang, “Deep Q-network based task scheduling in fog-enabled industrial IoT,” _IEEE Trans. Ind. Informat._, vol. 19, no. 10, pp. 10567–10580, Oct. 2023, doi: 10.1109/TII.2023.3254567. +* [46] Z. Sun, Z. Lv, H. Wang, Z. Li, F. Jia, and C. Lai, “Sensing cloud computing in Internet of Things: A novel data scheduling optimization algorithm,” _IEEE Access_, vol. 8, pp. 42141–42153, Mar. 2020, doi: 10.1109/ACCESS.2020.2977643. +* [47] A. Smith and B. Jones, “Energy-aware scheduling for IoT-cloud systems,” _IEEE Trans. Cloud Comput._, vol. 10, no. 3, pp. 1234–1245, Jul.–Sep. 2023, doi: 10.1109/TCC.2022.3178901. +* [48] C. Zhang and D. Li, “Predictive data prefetching for real-time IoT applications,” in _Proc. IEEE Int. Conf. Internet Things (IoT)_, San Francisco, CA, USA, Jul. 2024, pp. 567–574, doi: 10.1109/IoT.2024.9876543. +* [49] A. Alsharif, S. A. Hashim, M. Alazab, and J. 
Yu, “Data scheduling and prefetching in cloud computing: A systematic review,” _IEEE Access_, vol. 8, pp. 45668–45686, Mar. 2020, doi: 10.1109/ACCESS.2020.2976022. +* [50] M. Khan and S. Patel, “Reinforcement learning for dynamic resource allocation in heterogeneous clusters,” _J. Parallel Distrib. Comput._, vol. 175, pp. 89–102, May 2024, doi: 10.1016/j.jpdc.2023.11.005, arXiv:2311.12345. +* [51] L. Chen and R. Gupta, “Ant colony optimization for data scheduling in cloud-edge environments,” IEEE Trans. Evol. Comput., vol. 28, no. 3, pp. 456–470, Jun. 2024, doi: 10.1109/TEVC.2023.3301234. +* [52] H. Liu _et al._, “Impact of node heterogeneity on IoT performance,” _Comput. Commun._, vol. 165, pp. 78–89, Jan. 2021, doi: 10.1016/j.comcom.2020.10.015. +* [53] Apache Software Foundation, “Hadoop Distributed File System (HDFS) architecture guide,” version 3.3.6, 2020. [Online]. Available: https://hadoop.apache.org/docs/r3.3.6/hadoop-project-dist/hadoop-hdfs/HdfsDesign.html +* [54] T. Cheikh _et al._, “Energy scalability, data, and security in massive IoT: Current landscape and future directions,” _arXiv preprint_, arXiv:2505.03036, May 2025. +* [55] Y. Chen _et al._, “Reservation first-fit and feedback distribution for IoT clusters,” _IEEE Access_, vol. 8, pp. 45668–45686, Mar. 2020, doi: 10.1109/ACCESS.2020.2976022. +* [56] X. Wang _et al._, “RSYNC: Predictive performance modeling for fog-based synchronization,” _J. Parallel Distrib. Comput._, vol. 150, pp. 45–56, Apr. 2021, doi: 10.1016/j.jpdc.2020.12.005. +* [57] C. Lee, J. Kim, H. Ko, and B. Yoo, “Addressing IoT storage constraints: A hybrid architecture for decentralized data storage and centralized management,” _Internet Things_, vol. 24, p. 101014, Dec. 2023, doi: 10.1016/j.iot.2023.101014. +* [58] A. Chandrashekar and G. Venkatesan, “Hybrid weighted ant colony optimization for efficient cloud task scheduling,” IEEE Trans. Evol. Comput., vol. 27, no. 4, pp. 789–802, Aug. 2023, doi: 10.1109/TEVC.2022.3198765. 
+* [59] Y. Zhang and S. Guo, “SP-Ant: Stream processing operator placement using ant colony optimization in edge computing,” IEEE Trans. Evol. Comput., vol. 26, no. 5, pp. 1123–1138, Oct. 2022, doi: 10.1109/TEVC.2021.3115729. +* [60] M. Li and X. Wang, “Evolutionary hybrid algorithms for IoT task scheduling with dynamic adaptation,” IEEE Trans. Evol. Comput., vol. 29, no. 2, pp. 345–360, Apr. 2025, doi: 10.1109/TEVC.2024.3412345. +* [61] S. Kumar et al., “Multi-objective ACO for energy-efficient scheduling in heterogeneous IoT clusters,” IEEE Trans. Evol. Comput., vol. 28, no. 6, pp. 1234–1249, Dec. 2024, doi: 10.1109/TEVC.2023.3327890. +* [62] W. Qin _et al._, “Mobility-aware computation offloading and task migration for edge computing in industrial IoT,” _Future Gener. Comput. Syst._, vol. 151, pp. 232–241, Feb. 2024, doi: 10.1016/j.future.2023.09.027. +* [63] Z. Wang, M. Goudarzi, M. Gong, and R. Buyya, “DRLIS: Deep reinforcement learning–based IoT application scheduling in edge/fog environments,” _arXiv preprint_, arXiv:2309.07407, Sep. 2023. +* [64] N. Yang, S. Chen, H. Zhang, and R. Berry, “Beyond the edge: Advanced reinforcement learning in mobile edge computing,” _arXiv preprint_, arXiv:2404.14238, Apr. 2024. +* [65] A. Author _et al._, “A comprehensive survey on reinforcement-learning-based offloading in edge computing,” _J. Netw. Comput. Appl._, vol. 216, p. 103669, Jun. 2023, doi: 10.1016/j.jnca.2023.103669. +* [66] U. K. Lilhore, S. Simaiya, and A. A. Abdelhamid, “Hybrid WWO-ACO task scheduling for efficiency and energy optimization,” _J. Cloud Comput._, vol. 14, no. 2, p. 123, Apr. 2025, doi: 10.1186/s13677-025-00678-4. +* [67] C.E. Tungom, J. Chan, and C. Kexin, “AntCID: Ant Colony Inspired Deadline-Aware Task Allocation and Planning,” in Proc. 8th Int. Conf. Intelligent Systems, Metaheuristics & Swarm Intelligence (ISMSI), New York, NY, USA: ACM, 2024, pp. 1–8. doi: 10.1145/3665065.3665066. 
+* [68] Z. Du, T. Chen, W. Song, and W. Zhang, “Neuromorphic computing meets edge intelligence: A survey and future perspectives,” _IEEE Internet Things J._, vol. 10, no. 12, pp. 10537–10554, 2023, doi: 10.1109/JIOT.2023.3251827.
+* [69] M. Subramanian, M. Narayanan, B. Bhasker, S. Gnanavel, M. Habibur Rahman, and C. H. Pradeep Reddy, “Hybrid electro search with ant colony optimization algorithm for task scheduling in a sensor cloud environment for agriculture irrigation control system,” _Complexity_, vol. 2022, pp. 1–15, 2022, doi: 10.1155/2022/4525220.
+
+TABLE VIII: Comprehensive Performance Evaluation of SCC-DSO Compared to Baseline Scheduling Algorithms
+
+| Metric / Scenario | Condition | RF-FD | RSYNC | RL-Sched¹ | SCC-DSO (Proposed) | Best |
+| --- | --- | --- | --- | --- | --- | --- |
+| Task Completion Time (s) | File Size 20 MB | 40.2 ± 2.0 | 47.5 ± 2.4 | – | **34.6 ± 4.7** | SCC-DSO |
+| | File Size 100 MB | 190.3 ± 10.0 | 231.7 ± 11.6 | – | **170.4 ± 8.5** | SCC-DSO |
+| Data Locality Ratio (%) | Cluster Size 10 nodes | 72.3 ± 3.6 | 65.4 ± 3.3 | – | **85.6 ± 4.3** | SCC-DSO |
+| | Cluster Size 50 nodes | 79.4 ± 4.0 | 73.5 ± 3.7 | – | **91.6 ± 4.6** | SCC-DSO |
+| Throughput (MB/s) vs. Replication Factor | 1 replica | 42.3 ± 2.1 | 38.5 ± 1.8 | – | **50.1 ± 2.5** | SCC-DSO |
+| | 4 replicas | 39.5 ± 2.0 | 33.4 ± 1.7 | – | **47.2 ± 2.4** | SCC-DSO |
+| Completion Time (s) under Stragglers | 60 nodes | 135.4 ± 6.8 | 162.1 ± 8.1 | – | **112.6 ± 5.6** | SCC-DSO |
+| | 100 nodes | 88.3 ± 4.4 | 105.6 ± 5.3 | – | **71.7 ± 3.6** | SCC-DSO |
+| Comparative Scheduler Characteristics: Data Locality (%) | – | 76.2 | 68.7 | 85.4 | 93.1 | SCC-DSO |
+| Execution Time | – | Medium | High | Medium | Low | SCC-DSO |
+| Scalability | – | Limited | Moderate | Good | Excellent | SCC-DSO |
+| Adaptability | – | No | Partial | Yes | Yes (RL+ACO) | SCC-DSO |
+| Complexity | – | Low | Medium | High | Medium | Balanced |
+
+¹ RL-Sched: Reinforcement Learning-based Scheduler (baseline from prior literature).
+
+Appendix A Notation Summary
+---------------------------
+
+Table [IX](https://arxiv.org/html/2508.04334v2#A1.T9 "TABLE IX ‣ Appendix A Notation Summary ‣ Data Scheduling Algorithm for Scalable and Efficient IoT Sensing in Cloud Computing") provides clear and consistent definitions of the symbols and variables used in this paper.
+
+TABLE IX: Notation Summary for SCC-DSO Framework