An ALF data set is a logical set of data buffers. A data set informs the ALF runtime about the set of all data to which the task's work blocks refer. The ALF runtime uses this information to optimize how data is moved from the host's memory to the accelerator's memory and back.
You set up a data set independently of tasks or work blocks using the alf_dataset_create and alf_dataset_buffer_add functions. Before enqueuing the first work block, you must associate the data set with one or more tasks using the alf_task_dataset_associate function. As work blocks are enqueued, they are checked against the associated data set to ensure that the data they reference resides within one of its buffers. Finally, when you are finished with the data set, destroy it with the alf_dataset_destroy function.
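The following sketch shows this lifecycle on the host side. It is a minimal outline, assuming the conventional C prototypes for the data set calls and access-mode constants such as ALF_DATASET_READ_ONLY and ALF_DATASET_WRITE_ONLY; the ALF handle and task handle are assumed to have been obtained earlier from alf_init and alf_task_create. Check your implementation's alf.h for the exact names and signatures.

    #include <alf.h>

    /* Minimal data set lifecycle; error handling is omitted for brevity. */
    void run_with_dataset(alf_handle_t alf_handle, alf_task_handle_t task_handle,
                          float *in_buf, float *out_buf, unsigned long long n)
    {
        alf_dataset_handle_t dataset;

        /* 1. Create the data set independently of any task or work block. */
        alf_dataset_create(alf_handle, &dataset);

        /* 2. Register every host buffer that work blocks will reference. */
        alf_dataset_buffer_add(dataset, in_buf,  n * sizeof(float), ALF_DATASET_READ_ONLY);
        alf_dataset_buffer_add(dataset, out_buf, n * sizeof(float), ALF_DATASET_WRITE_ONLY);

        /* 3. Associate the data set with the task before enqueuing the first
         *    work block. */
        alf_task_dataset_associate(task_handle, dataset);

        /* ... create and enqueue work blocks, then wait for the task ... */

        /* 4. Destroy the data set when it is no longer needed. */
        alf_dataset_destroy(dataset);
    }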
A data set is made up of data buffers. Each data buffer can be identified as read-only, write-only, or read-write. You can add as many data buffers to the data set as needed, although an ALF implementation can limit the number of data buffers allowed in a single data set; refer to the implementation's documentation for any such restriction. After a data set has been associated with a task, you cannot add further data buffers to it.
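The ordering restriction matters in practice: every alf_dataset_buffer_add call must come before the data set is associated with a task. The fragment below illustrates the failing case, assuming, as elsewhere in the ALF API, that errors are reported as a negative return code:

    /* Correct order: add all buffers first, then associate. */
    alf_dataset_buffer_add(dataset, buf_a, size_a, ALF_DATASET_READ_WRITE);
    alf_task_dataset_associate(task_handle, dataset);

    /* Too late: the data set is already associated with a task, so this
     * call is expected to fail. */
    int rc = alf_dataset_buffer_add(dataset, buf_b, size_b, ALF_DATASET_READ_ONLY);
    if (rc < 0) {
        /* Handle the error, for example by placing buf_b in a new data set. */
    }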
A task can optionally be associated with one, and only one, data set. Work blocks within the task refer to data within the data set for their input, output, and in-out buffers. References to work block input or output data that lie outside the data set result in an error. The task context buffer and the work block parameter buffer do not need to reside within the data set and are not checked against it.
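For illustration, the sketch below builds a work block whose data transfer list entries point into the buffers registered above. It assumes the host data partitioning calls alf_wb_create, alf_wb_dtl_begin, alf_wb_dtl_entry_add, alf_wb_dtl_end, and alf_wb_enqueue, together with constants such as ALF_WB_SINGLE, ALF_BUF_IN, ALF_BUF_OUT, and ALF_DATA_FLOAT; confirm the exact prototypes and arguments against your implementation.

    alf_wb_handle_t wb;

    /* One single-use work block on a task already associated with the data set. */
    alf_wb_create(task_handle, ALF_WB_SINGLE, 1, &wb);

    /* Input entries must point into buffers registered with the data set;
     * an address outside every registered buffer causes an error. */
    alf_wb_dtl_begin(wb, ALF_BUF_IN, 0);
    alf_wb_dtl_entry_add(wb, in_buf, (unsigned int)n, ALF_DATA_FLOAT);
    alf_wb_dtl_end(wb);

    /* The same rule applies to output entries. */
    alf_wb_dtl_begin(wb, ALF_BUF_OUT, 0);
    alf_wb_dtl_entry_add(wb, out_buf, (unsigned int)n, ALF_DATA_FLOAT);
    alf_wb_dtl_end(wb);

    /* Work block parameters, by contrast, are not checked against the data set. */
    alf_wb_enqueue(wb);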
Multiple tasks can share the same data set, and it is your responsibility to make sure the shared data is used correctly. If two tasks with no dependency on each other use the same data from the same data set, ALF cannot guarantee the consistency of that data. For tasks that depend on each other and use the same data set, the data set is updated in the order in which the tasks run.
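A sketch of the dependent case, assuming the alf_task_depends_on call with the dependent task as the first argument (verify the argument order against your implementation):

    /* Both tasks see the same buffers through the shared data set. */
    alf_task_dataset_associate(producer_task, dataset);
    alf_task_dataset_associate(consumer_task, dataset);

    /* Declare the ordering explicitly: the producer's updates to the data set
     * are applied before the consumer runs. Without such a dependency, ALF
     * makes no consistency guarantee for the shared data. */
    alf_task_depends_on(consumer_task, producer_task);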
For host data partitioning, creating and using data sets is optional but recommended. For accelerator data partitioning, you must create and use data sets.