Multithreaded Programming Using Grand Central Dispatch for iOS (03.11.2019)

Concurrency is the concept of multiple tasks starting, running, and completing within the same time period. This does not necessarily mean that the tasks are executing simultaneously. In fact, in order for tasks to be run simultaneously, our application needs to be running on a multicore or multiprocessor system. Concurrency allows us to share the processor or cores for multiple tasks; however, a single core can only execute one task at a given time.

Parallelism is the concept of two or more tasks running simultaneously. Since each core of our processor can only execute one task at a time, the number of tasks executing simultaneously is limited to the number of cores within our processors and the number of processors that we have. As an example, if we have a four-core processor, then we are limited to running four tasks simultaneously. Today's processors can execute tasks so quickly that it may appear that larger tasks are executing simultaneously. However, within the system, the larger tasks are actually taking turns executing subtasks on the cores.

In order to understand the difference between concurrency and parallelism, let's look at how a juggler juggles balls. If you watch a juggler, it seems they are catching and throwing multiple balls at any given time, however, a closer look reveals that they are, in fact, only catching and throwing one ball at a time. The other balls are in the air waiting to be caught and thrown. If we want to be able to catch and throw multiple balls simultaneously, we need to have multiple jugglers.

This analogy works well because we can think of jugglers as the cores of a processor. A system with a single-core processor (one juggler), regardless of how it seems, can only execute one task (catch and throw one ball) at a time. If we want to execute more than one task at a time, we need to use a multicore processor (more than one juggler).

Back in the old days when all of the processors were single-core, the only way to have a system that executed tasks simultaneously was to have multiple processors in the system. This also required specialized software to take advantage of the multiple processors. In today's world, just about every device has a processor that has multiple cores, and both iOS and macOS are designed to take advantage of these multiple cores to run tasks simultaneously.

Traditionally, the way applications added concurrency was to create multiple threads; however, this model does not scale well to an arbitrary number of cores. The biggest problem with using threads was that our applications ran on a variety of systems (and processors), and in order to optimize our code, we needed to know how many cores/processors could be efficiently used at a given time, which is usually not known at the time of development.

To solve this problem, many operating systems, including iOS and macOS, started relying on asynchronous functions. These functions are often used to initiate tasks that could possibly take a long time to complete, such as making an HTTP request or writing data to disk. An asynchronous function typically starts the long running task and then returns prior to the task's completion. Usually, this task runs in the background and uses a callback function (such as closure in Swift) when the task completes.
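As a minimal sketch of this pattern (the function name fetchMessage and its one-second delay are illustrative, not part of any Apple API), an asynchronous function starts its work on a background queue and reports back through a completion closure:

```swift
import Foundation

// Hypothetical asynchronous function: it returns immediately, and the
// completion closure runs later on a background queue.
func fetchMessage(completion: @escaping (String) -> Void) {
    DispatchQueue.global().async {
        Thread.sleep(forTimeInterval: 1)   // simulate slow work
        completion("done")
    }
}

fetchMessage { message in
    print(message)                         // runs after about one second
}

// In a command-line program, keep the process alive for the callback.
Thread.sleep(forTimeInterval: 2)
```

Note that the caller reaches the line after fetchMessage immediately; only the closure waits for the result.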

These asynchronous functions work great for the tasks that the OS provides them for, but what if we need to create our own asynchronous functions and do not want to manage the threads ourselves? For this, Apple provides a couple of technologies: GCD and operation queues.

GCD is a low-level, C-based API that allows specific tasks to be queued up for execution and schedules the execution on any of the available processor cores. Operation queues are similar to GCD; however, they are Foundation objects and are internally implemented using GCD.

So, a key concept of GCD is the queue. GCD splits tasks into units of work and puts those units into queues for execution. The system manages the queues for us, executing units of work on multiple threads. We don’t need to start or manage the background threads directly, and we are freed from much of the bookkeeping that’s usually involved in implementing multithreaded applications.

Work placed into the queue may either run synchronously or asynchronously. When running a task synchronously, your app will wait and block the current run loop until execution finishes before moving on to the next task. Alternatively, a task that is run asynchronously will start, but return execution to your app immediately. This way, the app is free to run other tasks while the first one is executing.

The queue to which your task is submitted also has a characteristic of being either serial or concurrent. Serial queues only have a single thread associated with them and thus only allow a single task to be executed at any given time. A concurrent queue, on the other hand, is able to utilize as many threads as the system has resources for. Threads will be created and released as necessary on a concurrent queue.

GCD provides what is known as dispatch queues to manage submitted tasks. The queues manage these submitted tasks and execute them in a First-In, First-Out (FIFO) order. This ensures that the tasks are started in the order they were submitted.

A task is simply some work that our application needs to perform. For example, we can create tasks that perform simple calculations, read/write data to disk, make an HTTP request, or anything else that our application needs to do. We define these tasks by placing the code inside either a function or a closure and adding it to a dispatch queue.

GCD provides three types of queues:

  • Serial queues. Tasks in a serial queue (also known as a private queue) are executed one at a time in the order they were submitted. Each task is started only after the preceding task is completed. Serial queues are often used to synchronize access to specific resources because we are guaranteed that no two tasks in a serial queue will ever run simultaneously. Therefore, if the only way to access the specific resource is through the tasks in the serial queue, then no two tasks will attempt to access the resource at the same time or out of order.
  • Concurrent queues. Tasks in a concurrent queue (also known as a global dispatch queue) execute concurrently; however, the tasks are still started in the order that they were added to the queue. The exact number of tasks that can be executed at any given instance is variable and is dependent on the system's current conditions and resources. The decision on when to start a task is up to GCD and is not something that we can control within our application.
  • Main dispatch queue. The main dispatch queue is a globally available serial queue that executes tasks on the application's main thread. Since tasks put into the main dispatch queue run on the main thread, we usually dispatch to it from a background queue when some background processing has finished and the user interface needs to be updated.
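To see the serial queue's FIFO guarantee in action, here is a minimal sketch (the label demo.serial is arbitrary):

```swift
import Foundation

let serialQueue = DispatchQueue(label: "demo.serial")
var order: [Int] = []

// A serial queue executes the submitted tasks one at a time,
// in the order they were submitted (FIFO).
for i in 1...3 {
    serialQueue.async { order.append(i) }
}

serialQueue.sync { }   // block until all queued tasks have finished
print(order)           // [1, 2, 3]
```

A concurrent queue submitted the same three tasks could finish them in any order; only the start order is guaranteed.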

Dispatch queues offer several advantages over traditional threads.

  • The first and foremost advantage is that, with dispatch queues, the system handles the creation and management of threads rather than the application itself. The system can scale the number of threads dynamically, based on the overall available resources of the system and the current system conditions. This means that dispatch queues can manage the threads with greater efficiency than we could.
  • Another advantage of dispatch queues is that we are able to control the order in which the tasks are started. With serial queues, not only do we control the order in which tasks are started, but we also ensure that one task does not start before the preceding one is complete. With traditional threads, this can be very cumbersome and brittle to implement, but with dispatch queues, as we will see later in this chapter, it is quite easy.

Creating queues

We use the DispatchQueue initializer to create a new dispatch queue. The following code shows how we would create a new dispatch queue:

let concurrentQueue = DispatchQueue(label: "cqueue", attributes: .concurrent)  
let serialQueue = DispatchQueue(label: "squeue") 

The first line creates a concurrent queue with the label cqueue, while the second line creates a serial queue with the label squeue. The DispatchQueue initializer takes the following parameters:

  • Label. This is a string label that is attached to the queue to uniquely identify it in debugging tools, such as Instruments and crash reports.
  • Attributes. This specifies the type of queue to make. Pass .concurrent (DispatchQueue.Attributes.concurrent) to create a concurrent queue; if this parameter is omitted, a serial queue is created.

Creating and using a concurrent queue

A concurrent queue will execute the tasks in a FIFO order; however, the tasks will execute concurrently and finish in any order. Let's see how we would create and use a concurrent queue.

let cqueue = DispatchQueue(label: "cqueue", attributes:.concurrent) 

Now, let's see how we would use our concurrent queue by using the performCalculation() function to perform some calculations:

func performCalculation(_ value: Int) {
    _ = 2 + value
}

cqueue.async {
    performCalculation(5)
}

In the preceding code, we created a closure, which represents our task and simply calls the performCalculation() function. Finally, we use the async(execute:) method of our queue to execute it. This code will execute the task in a concurrent dispatch queue, which is separate from the main thread.

Creating and using a serial queue

A serial queue functions a little differently from a concurrent queue. A serial queue will only execute one task at a time and will wait for one task to complete before starting the next one. This queue, like the concurrent dispatch queue, follows the FIFO order. The following line of code will create a serial queue:

let squeue = DispatchQueue(label: "squeue")  

Now, let's see how we would use our serial queue by calling the performCalculation(_:) function:

squeue.async {
    performCalculation(5)
}

A serial queue only executes one task at a time and the queue waits for each task to complete before starting the next one.

Async versus sync

In the previous examples, we used the async method to execute the code blocks. When we use the async method, the call will not block the current thread. This means that the method returns and the code block is executed asynchronously.

Rather than using the async method, we could use the sync method to execute the code blocks. The sync method will block the current thread, which means it will not return until the execution of the code has completed. Generally, we use the async method, but there are use cases where the sync method is useful. This use case is usually when we have a separate thread and we want that thread to wait for some work to finish.
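The difference can be seen in a small sketch: work submitted with sync is guaranteed to finish before the caller's next line, while async returns immediately (the queue label and strings here are illustrative):

```swift
import Foundation

let queue = DispatchQueue(label: "squeue")
var order: [String] = []

// sync blocks the caller: this append completes before the next line runs.
queue.sync { order.append("sync task") }
order.append("after sync")

// async returns immediately; the closure runs later on the queue.
queue.async { order.append("async task") }

// A serial queue is FIFO, so this sync closure runs after the async one,
// which also makes it a simple way to wait for the queue to drain.
queue.sync { order.append("done waiting") }
print(order)   // ["sync task", "after sync", "async task", "done waiting"]
```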

Executing code on the main queue

The DispatchQueue.main.async(execute:) function will execute code on the application's main queue. We generally use this function when we want to update our user interface from another thread or queue.

The main queue is automatically created for the main thread when the application starts. This main queue is a serial queue; therefore, items in this queue are executed one at a time, in the order that they were submitted. We will generally want to avoid using this queue unless we have a need to update the user interface from a background thread.

The following code example shows how we would use this function:

let squeue = DispatchQueue(label: "squeue")
squeue.async {
    let resizedImage = image.resize(to: rect)
    DispatchQueue.main.async {
        picView.image = resizedImage
    }
}
In the previous code, we assume that we have added a method to the UIImage type that will resize the image. In this code, we create a new serial queue and, in that queue, we resize an image. This is a good example of how to use a dispatch queue because we would not want to resize an image on the main queue since it would freeze the UI while the image is being resized. Once the image is resized, we then need to update a UIImageView with the new image; however, all updates to the UI need to occur on the main thread. Therefore, we will use the DispatchQueue.main.async function to perform the update on the main queue.

Quality Of Service (QoS) and Priorities

When working with GCD and dispatch queues, it is quite often necessary to tell the system which tasks of your app are more important than others and which need priority in execution. Of course, tasks running on the main thread always have the highest priority, as the main queue also deals with the UI and keeps the app responsive. In GCD, this information about the importance and priority of tasks is called Quality of Service (QoS).

The QoS classes are listed below from the highest priority to the lowest:

  • .userInteractive. This represents tasks that must complete immediately in order to provide a nice user experience. Use it for UI updates, event handling and small workloads that require low latency. The total amount of work done in this class during the execution of your app should be small. This should run on the main thread.
  • .userInitiated. The user initiates these asynchronous tasks from the UI. Use them when the user is waiting for immediate results and for tasks required to continue user interaction. They execute in the high priority global queue. For example, you may need to open a document or read from a local database. If the user clicked a button, this is probably the queue you want. Tasks performed in this queue should take a few seconds or less to complete.
  • .utility. This represents long-running tasks, typically with a user-visible progress indicator. Use it for computations, I/O, networking, continuous data feeds and similar tasks. This class is designed to be energy efficient. This will get mapped into the low priority global queue.
  • .background. This represents tasks that the user is not directly aware of. Use it for prefetching, maintenance, and other tasks that don’t require user interaction and aren’t time-sensitive. This will get mapped into the background priority global queue.
  • .unspecified, .default. There are two other possible choices, which you should not use explicitly. There’s a .default option, which falls between .userInitiated and .utility and is the default value of the qos argument. It’s not intended for you to use directly. The second option is .unspecified, which exists to support legacy APIs that may opt the thread out of a quality of service. It’s good to know they exist, but if you’re using them, you’re almost certainly doing something wrong.
The following example submits a long-running task to a .utility queue and then updates the UI on the main queue when it finishes:

let queue = DispatchQueue(label: "queue", qos: .utility)
queue.async { [weak self] in
    guard let self = self else { return }

    for i in 0..<10 {
        print(i)
    }

    DispatchQueue.main.async {
        self.textLabel.text = "Done!"
    }
}
Using asyncAfter

The asyncAfter function will execute a block of code asynchronously after a given delay. This is very useful when we need to pause the execution of our code. The following code sample shows how we would use the asyncAfter function:

let queue = DispatchQueue(label: "squeue")
let delayInSeconds = 2.0
let pTime = DispatchTime.now() + delayInSeconds
queue.asyncAfter(deadline: pTime) {
    print("Times Up")
}

In this code, we begin by creating a serial dispatch queue. We then create an instance of the DispatchTime type and calculate the time to execute the block of code based on the current time. We then use the asyncAfter function to execute the code block after the delay.

Using Dispatch Groups

A dispatch group lets each task run independently of the others, which can improve performance since the tasks operate concurrently rather than sequentially. We can also use the group's notify(queue:execute:) method to specify an additional closure that will run only when all the other closures in the group have completed.

To create a dispatch group, we just need to create a DispatchGroup object like this:

let group = DispatchGroup()

Then we submit each task to a queue as part of this dispatch group like this:

func calculateFirstResult(_ data: String) -> String {
    Thread.sleep(forTimeInterval: 3)
    let message = "Number of chars: \(data.count)"
    return message
}

func calculateSecondResult(_ data: String) -> String {
    Thread.sleep(forTimeInterval: 4)
    return data.replacingOccurrences(of: "E", with: "e")
}

queue.async(group: group) {
    firstResult = self.calculateFirstResult(processedData)
}
queue.async(group: group) {
    secondResult = self.calculateSecondResult(processedData)
}

To run a final closure after all other closures have finished, we create a group.notify queue like this:

group.notify(queue: queue) {
    let resultsSummary = "First: [\(firstResult!)]\nSecond: [\(secondResult!)]"
    DispatchQueue.main.async {
        self.resultsTextView.text = resultsSummary
    }
    let endTime = Date()
    print("Completed in \(endTime.timeIntervalSince(startTime)) seconds")
}

One final detail: both the group.notify closure and the queue.async closures need to access the firstResult and secondResult variables, so we need to declare them outside of both closures like this:

var firstResult: String! 
var secondResult: String!

If, for some reason, you can’t respond asynchronously to the group’s completion notification, then you can instead use the wait method on the dispatch group. This is a synchronous method that will block the current queue until all the jobs have finished. It takes an optional parameter that specifies how long to wait for the tasks to complete. If it is not specified, the wait time is infinite:

let group = DispatchGroup()
someQueue.async(group: group) { ... } 
someQueue.async(group: group) { ... } 
someOtherQueue.async(group: group) { ... }

if group.wait(timeout: .now() + 60) == .timedOut {
    print("The jobs didn’t finish in 60 seconds")
}

Operation and OperationQueue

GCD is great for common tasks that need to be run a single time in the background. When you find yourself building functionality that should be reusable, such as image editing operations, you will likely want to encapsulate that functionality into a class. By subclassing Operation, you can accomplish that goal.

Operations are fully-functional classes that can be submitted to an OperationQueue, just like you'd submit a closure of work to a DispatchQueue for GCD. Because they’re classes and can contain variables, you gain the ability to know what state the operation is in at any given point.

Operations can exist in any of the following states:

  • isReady
  • isExecuting
  • isCancelled
  • isFinished
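A minimal Operation subclass might look like the following sketch (the class name FilterOperation and its uppercasing work are illustrative, not from any Apple API):

```swift
import Foundation

// Hypothetical operation that transforms a string.
class FilterOperation: Operation {
    let input: String
    var output: String?

    init(input: String) {
        self.input = input
        super.init()
    }

    override func main() {
        // Respect cancellation before doing the work.
        if isCancelled { return }
        output = input.uppercased()
    }
}

let queue = OperationQueue()
let operation = FilterOperation(input: "hello")
queue.addOperation(operation)
queue.waitUntilAllOperationsAreFinished()
print(operation.output ?? "")   // HELLO
```

Because the operation is an object, we can inspect its state flags, cancel it, or read its output property after it finishes, which is harder to do with a bare closure submitted to a DispatchQueue.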

Additional reading