Experience the Pathfinder bug
Introduction
In the various codelabs dealing with scheduling, you learned how to schedule a hybrid set of tasks, made of both periodic and aperiodic tasks. In these codelabs, tasks were assumed to be independent and no task could be blocked by any other task.
In this codelab, we address the problem of dependencies among tasks. In multi-tasking programs, tasks often need to share resources and shared resources must be protected against concurrent access. Such protections are mostly implemented using mutexes and this is not different with Keil RTX5.
In this codelab, you will also experience a situation similar to the one that occurred on the Mars Pathfinder lander (see image below) in 1997. Although you will experience it from a much more comfortable seat than the engineers back in 1997, this example should help illustrate the need for implementing resource access protocols such as PIP in Keil RTX5.
Source: https://upload.wikimedia.org/wikipedia/commons/7/7f/Mars_Pathfinder_Lander_preparations.jpg
What you’ll build
In this codelab, you will build an application with a number of interdependent tasks. You will create critical sections accessed concurrently by several tasks and understand how access to those critical sections impacts the program and may lead to unexpected behaviors.
What you’ll learn
- How to use Keil RTX5 mutexes.
- How to protect critical sections with properly configured mutexes.
What you’ll need
- You need to have finished the Digging into RTX codelab.
- You need to have completed the Scheduling of periodic tasks and Scheduling of a hybrid set of tasks codelabs.
- You need to have completed the Robust Development Methodologies (I) codelab and the Robust Development Methodologies (II) codelab.
Scheduling of periodic tasks
Based on the Scheduling of periodic tasks codelab, create a program with 3 tasks defined as
| Task | WCET [ms] | Period [ms] |
|---|---|---|
| Task 1 | 10 | 50 |
| Task 2 | 20 | 75 |
| Task 3 | 70 | 150 |
Under RM, the schedule for the tasks is
You should observe a similar schedule when running your application:
Make the tasks dependent on each other
To create dependencies among tasks, we simply add critical sections protected by mutexes. Under Keil RTX5, a mutex may be created as:
const osMutexAttr_t Mutex_A_attr = {
    "MutexA",                               // human readable mutex name
    osMutexRecursive | osMutexPrioInherit,  // attr_bits
    NULL,                                   // memory for control block
    0U                                      // size for control block
};

osMutexId_t mutex_A = osMutexNew(&Mutex_A_attr);
if (mutex_A == NULL) {
    app_error_handler(CANNOT_CREATE_MUTEX);
}
For testing different dependency scenarios with the same code, you may use the following code structure:
struct compute_info_t {
    uint32_t computation_time;
    osMutexId_t* mutex_at_entry;
    osMutexId_t* mutex_at_exit;
};

struct task_info_t {
    uint32_t period;
    struct compute_info_t compute_info[5];
};
with the following task() function:
void task(void* arg) {
    // cppcheck-suppress misra-c2012-11.5
    // rationale: void* being the one solution in C that
    // allows generic interfaces - albeit not type safe
    struct task_info_t task_info = *((struct task_info_t*)arg);
    const uint8_t nbr_of_subtasks = sizeof(task_info.compute_info) / sizeof(task_info.compute_info[0]);

    if (START_DELAY > 0U) {
        osDelayUntil(START_DELAY);
    }
    uint32_t next_execution_in_ms = START_DELAY;
    for (;;) {
        // run the subtasks, acquiring/releasing mutexes as configured
        for (uint8_t subtask_index = 0; subtask_index < nbr_of_subtasks; subtask_index++) {
            if (task_info.compute_info[subtask_index].mutex_at_entry != NULL) {
                osStatus_t status = osMutexAcquire(*task_info.compute_info[subtask_index].mutex_at_entry, osWaitForever);
                if (status != osOK) {
                    app_error_handler(ERROR_ACQUIRE_MUTEX);
                }
            }
            busy_wait_ms(task_info.compute_info[subtask_index].computation_time);
            if (task_info.compute_info[subtask_index].mutex_at_exit != NULL) {
                osStatus_t status = osMutexRelease(*task_info.compute_info[subtask_index].mutex_at_exit);
                if (status != osOK) {
                    app_error_handler(ERROR_RELEASE_MUTEX);
                }
            }
        }
        // calculate next period start time
        next_execution_in_ms += task_info.period;
        // wait until next execution
        osDelayUntil(next_execution_in_ms);
    }
}
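The task() function relies on a busy_wait_ms() helper that actively keeps the CPU busy, unlike osDelay(), which yields the processor. This helper is not shown in this codelab; a minimal sketch, assuming the RTX5 kernel tick is configured at 1 ms, could look like:

```c
// Hypothetical sketch of busy_wait_ms(): spins (does not yield) until
// the requested number of kernel ticks has elapsed. Assumes a 1 ms
// kernel tick. Note: time spent preempted by higher-priority tasks
// also counts, so this only approximates pure computation time.
static void busy_wait_ms(uint32_t duration_in_ms) {
    const uint32_t start = osKernelGetTickCount();
    while ((osKernelGetTickCount() - start) < duration_in_ms) {
        // busy loop: keep the CPU occupied
    }
}
```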
This function implements a task whose portions can be protected by one or more critical sections. To reproduce the scenario of independent periodic tasks, you may declare a task_infos variable as
static const struct task_info_t task_infos[] = {
    { .period = 50U,  .compute_info = { { .computation_time = 10U, .mutex_at_entry = NULL, .mutex_at_exit = NULL } } },
    { .period = 75U,  .compute_info = { { .computation_time = 20U, .mutex_at_entry = NULL, .mutex_at_exit = NULL } } },
    { .period = 150U, .compute_info = { { .computation_time = 70U, .mutex_at_entry = NULL, .mutex_at_exit = NULL } } }
};
and create your tasks using code such as
const uint8_t nbr_of_tasks = sizeof(th_atts) / sizeof(th_atts[0]);
for (uint8_t task_index = 0; task_index < nbr_of_tasks; task_index++) {
    osThreadId_t tid_thr_task = osThreadNew(task, (void*)&task_infos[task_index], &th_atts[task_index]);
    if (tid_thr_task == NULL) {
        app_error_handler(CANNOT_CREATE_THREAD);
    }
}
Demonstrate the priority inversion phenomenon
To demonstrate the priority inversion phenomenon, one must first disable the priority inheritance mechanism implemented in Keil RTX5 mutexes. This is done by removing the osMutexPrioInherit attribute when creating the mutex:
const osMutexAttr_t Mutex_A_attr = {
    "MutexA",          // human readable mutex name
    osMutexRecursive,  // attr_bits
    NULL,              // memory for control block
    0U                 // size for control block
};
To reproduce the following scenario in our application:

we need to create the following task_infos variable:
static const struct task_info_t task_infos[] = {
    { .period = 50U,  .compute_info = { { .computation_time = 5U,  .mutex_at_entry = NULL,     .mutex_at_exit = NULL },
                                        { .computation_time = 2U,  .mutex_at_entry = &mutex_A, .mutex_at_exit = &mutex_A },
                                        { .computation_time = 3U,  .mutex_at_entry = NULL,     .mutex_at_exit = NULL } } },
    { .period = 75U,  .compute_info = { { .computation_time = 20U, .mutex_at_entry = NULL,     .mutex_at_exit = NULL } } },
    { .period = 150U, .compute_info = { { .computation_time = 10U, .mutex_at_entry = NULL,     .mutex_at_exit = NULL },
                                        { .computation_time = 40U, .mutex_at_entry = &mutex_A, .mutex_at_exit = &mutex_A },
                                        { .computation_time = 20U, .mutex_at_entry = NULL,     .mutex_at_exit = NULL } } }
};
With this configuration of tasks and without priority inheritance protocol for
the mutex, you should observe a similar schedule when running your application:
As you can see in the picture above, a lateness violation is detected for task 1: blocked by priority inversion, task 1 overruns its period, so its osDelayUntil() call completes later than the requested time.
If you execute the same task configuration but with priority inheritance for the mutex, you should then observe a schedule similar to:
As you can observe in this diagram, priority inheritance allows all deadlines to be met.
Demonstrate the schedulability under RM and PIP
To demonstrate the schedulability of the set of tasks, first compute the upper bound of the blocking time for each task. Then apply the schedulability test using the obtained \(B_{i}\) values.
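As a reminder of the standard results (not specific to this codelab): under PIP, the blocking time \(B_i\) of task \(\tau_i\) is bounded by the critical sections of lower-priority tasks that can block it. In this configuration, the only critical section of a lower-priority task is the 40 ms section of task 3 guarded by mutex_A. The utilization-based test extended with blocking then requires, for each task \(\tau_i\) (tasks indexed by decreasing priority):

\[
\forall i,\ 1 \le i \le n:\qquad \sum_{k=1}^{i} \frac{C_k}{T_k} + \frac{B_i}{T_i} \;\le\; i\,\left(2^{1/i} - 1\right)
\]

This test is only sufficient; when it is inconclusive, response-time analysis extended with blocking, \(R_i = C_i + B_i + \sum_{k < i} \lceil R_i / T_k \rceil \, C_k\), gives a tighter (though still pessimistic) verdict.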