huawei-hackathon-2021
Contributors
Challenge
Requirements:
- python=3.8.10
- Standard library only (no third-party imports)
Important factors:
Data dependencies between tasks form a Directed Acyclic Graph (DAG).
A task waits until its parent tasks have finished and the data they generate has reached it.
Communication time: the time it takes to send a parent's data to its children when they are located on different processing nodes; otherwise it can be assumed negligible. As a result, we prefer to assign communicating tasks to the same processing node.
Assign communicating tasks to the same processing node where possible; if not, make data transfers from parent -> children as fast as possible (see the sketch after this list).
Affinity: a task has an affinity for the processing node on which its previous instances ran; rerunning it there reduces the overhead of initializing the task (e.g., fewer instruction-cache misses). Ideally, a task runs on the same processing node where its previous instance was recently run.
Reuse processing nodes where possible, i.e. rerun a task on the node that ran its previous instance.
Load balancing of processing nodes: the CPU utilization of the processing nodes should be balanced and uniform.
Self-explanatory.
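The interaction between these factors can be sketched as a list-scheduling recurrence. A minimal sketch, assuming hypothetical structures (`task.parents`, `finish_time`, `node_of`, `comm_time`, `history`) that are not part of the provided test-case format:

```python
# Sketch of how communication time and affinity affect scheduling decisions.
# All names (task.parents, finish_time, node_of, comm_time, history) are
# illustrative assumptions, not part of the provided test-case format.

def earliest_start(task, node, finish_time, node_of, comm_time):
    """Earliest time `task` can start on `node`: every parent must have
    finished, plus transfer time if the parent ran on a different PN."""
    ready = 0
    for parent in task.parents:
        delay = 0 if node_of[parent] == node else comm_time[(parent, task)]
        ready = max(ready, finish_time[parent] + delay)
    return ready

def effective_exec_time(task, node, history):
    """Apply the 10% affinity discount when the task's previous instance
    is among the last X tasks run on `node` (see Assumptions below)."""
    return 0.9 * task.exec_time if task.name in history[node] else task.exec_time
```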
Assumptions
- If communicating tasks are assigned to the same processing node, the communication time between them is negligible, i.e., equal to 0.
Using same node reduces communication time to 0.
- If the previous instance of the same task was recently assigned to the same processing node, the estimated execution time of the current instance of the task is reduced by 10%. For example, if T0 is assigned to PN1, the execution time of the second instance of T0 (denoted by T0’) on PN1 is 9µs rather than 10µs.
Using same node reduces processing time by 10%. PN1 = Processing Node 1. T0 = Task 0.
- "Recently assigned" can be translated to:
- If the previous instance of the current task is among the last Χ tasks run on the PN.
- For this purpose we need to keep, a history of the X recent tasks which run on each PN.
I.e., log the tasks recently run on each PN.
- A DAG’s deadline, denoted by d_i, is relative to its release time. For example, if the deadline of a DAG is 3 and the release time of its ith instance is 12, that instance should be completed before time 15.
- All time units are in microseconds.
- The execution of tasks is non-preemptive, in the sense that a task that starts executing on a processor will not be interrupted by other tasks until its execution is completed.
Once a task starts on a processor it runs to completion; tasks cannot run concurrently on the same processor.
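For the "last X tasks" history, a minimal sketch using `collections.deque` (the window size `X` and node count below are placeholders, not values from the spec):

```python
from collections import deque

X = 5          # placeholder window size; use the value the test case specifies
NUM_PNS = 4    # placeholder number of processing nodes

# One bounded history per PN: appending beyond maxlen drops the oldest
# entry, which matches the "last X tasks run on the PN" rule.
history = {pn: deque(maxlen=X) for pn in range(NUM_PNS)}

def recently_ran(task_name, pn):
    """True if a previous instance of this task is among the last X tasks
    executed on `pn`, so the 10% discount applies (e.g., 10µs -> 9µs)."""
    return task_name in history[pn]

def record_run(task_name, pn):
    history[pn].append(task_name)
```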
Problem Formulation
Consider a real-time application including n DAGs (DAG1, DAG2, ..., DAGn), each of which is released periodically with period Pk. Instances of each DAG are released over the course of the running application; the ith instance of the kth DAG is denoted by Dk(i). The application runs on x homogeneous processing nodes (PN1, PN2, ..., PNx). The algorithm should find a solution for assigning the tasks of the DAGs to the PNs such that all DAG deadlines are respected and the makespan of the application is minimized. Makespan: the time at which all instances of all DAGs are completed.
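To make the timing model concrete, a small sketch of release times and absolute deadlines (0-based instance indexing is an assumption; the spec may define it differently):

```python
# Sketch of the timing model. Instance indexing is an assumption here
# (0-based, with the ith instance released at i * Pk); the challenge
# spec may index instances differently.

def release_time(i, period):
    """Release time of the ith instance of a DAG with period Pk."""
    return i * period

def absolute_deadline(i, period, relative_deadline):
    """An instance must complete before its release time plus its
    relative deadline (all times in microseconds)."""
    return release_time(i, period) + relative_deadline

# Matches the example in the Assumptions: deadline 3, release time 12 -> 15.
assert absolute_deadline(i=4, period=3, relative_deadline=3) == 15
```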
Questions:
Propose an algorithm that solves the considered problem by maximizing a utility function combining the total application makespan and the standard deviation of the PN utilizations (i.e., how uniform the assignment is), such that both the task-dependency constraints and the DAG deadlines are met.
Utility Function = 1 / (10 * Normalized(Makespan) + STD(PN utilizations))
Normalized(Makespan) = Makespan / Application_worst_case_completion_time
Application_worst_case_completion_time = SUM(execution_times) + SUM(DAG_communication_times)
Normalized(Makespan) and STD(PN utilizations) are both values in [0, 1]. The algorithm should specify the assignment of tasks to PNs that maximizes the utility function, as well as the order in which tasks are scheduled and the execution order of the tasks on each PN.
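The utility function transcribes directly into Python. A minimal sketch, assuming utilizations are given as fractions and using population standard deviation (`statistics.pstdev`), which the spec does not pin down:

```python
from statistics import pstdev

def utility(makespan, worst_case_completion_time, pn_utilizations):
    """Utility = 1 / (10 * Normalized(Makespan) + STD(PN utilizations)).
    Utilizations are fractions in [0, 1]. Population STD (pstdev) is an
    assumption; the spec does not say sample vs. population."""
    normalized_makespan = makespan / worst_case_completion_time
    return 1.0 / (10 * normalized_makespan + pstdev(pn_utilizations))

# Example: 80µs makespan against a 200µs worst case on four PNs.
print(utility(80, 200, [0.90, 0.80, 0.85, 0.95]))  # ~0.247
```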
I/O
Input
Scheduler input: 12 test cases, each consisting of a JSON file that includes:
- A set of independent DAGs
- The deadlines for the DAGs
- Number of instances of each DAG
- Period (Pk) of the DAGs
- List of tasks for each DAG
- Execution times of the tasks in each DAG
- Communication (inter-task) times for each DAG
- __Number of cores specified in each test case__
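Loading a test case might look like the following sketch; the JSON key names are hypothetical, since the exact schema is defined by the provided files:

```python
import json

# Sketch of loading one test case. The key names below are hypothetical;
# check the actual schema of the provided JSON files.
with open("testcase1.json") as f:        # hypothetical file name
    case = json.load(f)

num_cores = case["num_cores"]            # hypothetical key
for dag in case["dags"]:                 # hypothetical key
    print(dag["deadline"], dag["period"], dag["instances"])  # hypothetical keys
```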
Output
A CSV file including:
- The PN_id to which each task was assigned (0, 1, ..., x)
- Order of execution of the tasks on their assigned PN
- Start and finish time of each task
- Application makespan
- The STD of the clusters' utilization (i.e., the PN utilizations)
- Value of the utility function
- The execution time of the scheduler on our machine.
Note for Python coders: if you code in Python, you need to write your own printer function to create the CSV files in the specified format (a minimal sketch follows).
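A minimal sketch of such a printer using plain string formatting; the column layout is an assumption and should be matched to the format specified in the challenge materials:

```python
def write_schedule_csv(path, rows, makespan, std_util, utility_value, sched_time):
    """rows: iterable of (task_id, pn_id, order, start, finish) tuples.
    The column layout below is an assumption; match the format specified
    in the challenge materials."""
    with open(path, "w") as f:
        f.write("task_id,pn_id,order,start,finish\n")
        for task_id, pn_id, order, start, finish in rows:
            f.write(f"{task_id},{pn_id},{order},{start},{finish}\n")
        f.write(f"makespan,{makespan}\n")
        f.write(f"std_utilization,{std_util}\n")
        f.write(f"utility,{utility_value}\n")
        f.write(f"scheduler_execution_time,{sched_time}\n")
```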