Keeping Your Interface Responsive

[The following is an excerpt from More iPhone 3 Development: Tackling iPhone SDK 3. For more information about the book, visit apress.com/book/view/143022505X.]


As a general rule, you do not want your application's user interface to ever become unresponsive. When you're developing iPhone applications, if you try to do too much at one time in an action, delegate method, or in a method called from one of those methods, your application's interface can skip or even freeze while the long-running method does its job. Your user will expect to be able to interact with your application at all times, or at the very least, will expect to be kept updated by your user interface when they aren't allowed to interact with it.


In computer programming, the ability to have multiple sets of operations happening at the same time is generally referred to as "concurrency."


In this excerpt, we're going to look at some more general-purpose solutions for adding concurrency to your application. These will allow your user interface to stay responsive even when your application is performing long-running tasks. Although there are many ways to add concurrency to an application, we're going to look at just two. These two, combined with what you already know about run loop scheduling for networking, should allow you to accommodate just about any long-running task.


The first mechanism we're going to look at is the timer. Timers are objects that can be scheduled with the run loop. A timer can call a method on a specific object at a set interval. You can set a timer to call a method on one of your controller classes, for example, ten times per second. Once you kick it off, your method will fire approximately every tenth of a second until you tell the timer to stop.


Neither run loop scheduling nor timers are what some people would consider "true" forms of concurrency. In both cases, the application's main run loop will check for certain conditions, and if those conditions are met, it will call out to a specific method on a specific object. If the method that gets called runs for too long, however, your interface will still become unresponsive. Working with run loops and timers is considerably less complex than implementing what we might call "true" concurrency, which is to have multiple tasks (and multiple run loops) functioning at the same time.


The other mechanism we're going to look at is relatively new in the Objective-C world. It's called an "operation queue," and it works together with special objects you create called "operations." The operation queue can manage multiple operations at the same time, and it makes sure that those operations get processing time based on some simple rules that you set down. Each operation has a specific set of commands that take the form of a method you write, and the operation queue will make sure that each operation's method gets run in such a way as to make good use of the available system resources.


Operation queues are really nice because they are a high-level abstraction that hides the nitty-gritty details involved in implementing true concurrency. On the iPhone, queues leverage an operating system feature called "threads" to give processing time to the various operations they manage. Apple currently recommends using operation queues rather than threads, not only because operation queues are easier to use, but also because they give your application other advantages.


If you're at all familiar with Mac OS X Snow Leopard, you've probably heard of Grand Central Dispatch (GCD), which is a technology that allows applications to take greater advantage of the fact that modern computers have multiple processing cores and sometimes multiple processors. If you used an operation queue in a Mac program back before GCD was released, when you re-compiled your application for Snow Leopard, your code automatically received the benefit of GCD for free. If you had used another form of concurrency such as threads, your application would not have automatically benefitted from GCD. We don't know what the future holds for the iPhone SDK, but we are likely to continue to see faster processors and possibly even multiple core processors. By using operation queues for your concurrency needs, you will essentially future-proof your applications. If Grand Central Dispatch comes to the iPhone in a future release of the iPhone SDK, for example, you will be able to leverage that functionality with little or no work. If Apple creates some other nifty new technology specifically for handling concurrency in a mobile application, your application will be able to take advantage of that.


You can probably see why we're limiting our discussion of "true" concurrency to operation queues. They are clearly the way of the future for both Cocoa and Cocoa Touch. They make our lives as programmers considerably easier, and they help us take advantage of technologies that haven't even been written yet. What could be better?


Timers


In the Foundation framework shared by Cocoa and Cocoa Touch, there's a class called NSTimer that you can use to call methods on a specific object at periodic intervals. Timers are created, and then scheduled with a run loop. Once a timer is scheduled, it will fire after a specified interval. If the timer is set to repeat, it will continue to call its target method repeatedly each time the specified interval elapses.
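
Here's a minimal sketch of what that looks like in practice. The updateTimer property and the method names are just illustrative choices for this sketch; the only NSTimer pieces are the scheduling call and invalidate:

// Assumes a view controller with a retained updateTimer property
// (the property and method names are illustrative, not part of NSTimer).
- (void)startTimer {
    self.updateTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                         target:self
                                                       selector:@selector(updateDisplay:)
                                                       userInfo:nil
                                                        repeats:YES];
}

// Called on the main thread roughly every tenth of a second.
- (void)updateDisplay:(NSTimer *)timer {
    // ... update a progress view, a label, and so on ...
}

- (void)stopTimer {
    [self.updateTimer invalidate];   // stops and unschedules the timer
    self.updateTimer = nil;
}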


Timers are not guaranteed to fire exactly at the specified interval. Because of the way the run loop functions, there's no way to guarantee the exact moment when a timer will fire. The timer will fire on the first pass through the run loop that happens after the specified amount of time has elapsed. That means a timer will never fire before the specified interval, but it may fire after. Usually, the actual interval is only milliseconds longer than the one specified, but you can't rely on that being the case. If a long-running method runs on the main loop, like the one in Stalled, then the run loop won't get to fire the scheduled timers until that long-running method has finished, potentially a long time after the requested interval.


Timers fire on the thread whose run loop they are scheduled into. In most situations, unless you specifically intend to do otherwise, your timers will get created on the main thread, and the methods that they fire will also execute on the main thread. This means that you have to follow the same rules as with action methods. If you try to do too much in a method that is called by a timer, you will stall your user interface. As a result, if you want to use timers as a mechanism for keeping your user interface responsive, you need to break your work down into smaller chunks.
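
One common way to do that is to have the timer's method perform just one small slice of the job each time it fires, and invalidate the timer when the job is done. As a sketch (processNextChunk and isWorkFinished are hypothetical methods standing in for whatever work your application actually performs):

- (void)processChunk:(NSTimer *)timer {
    [self processNextChunk];      // do one small slice of the job

    if ([self isWorkFinished]) {
        [timer invalidate];       // stop firing once everything is done
    }
}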


Threads


Every application has at least one thread, which is a sequence of instructions. The thread that begins executing when the program is launched is called the main thread. In a Cocoa Touch application, the main thread contains the application's main run loop, which is responsible for handling input and updating the user interface. Although there are some instances where Cocoa Touch uses additional threads implicitly, pretty much all of the application code you write will run on the main thread unless you specifically spawn a thread or use an operation in an operation queue.


To implement concurrency, additional threads are spawned, each tasked with performing a specific set of instructions. Every thread has equal access to all of your application's memory, which means that any object, except for local variables, can potentially be read and modified from any thread. Generally speaking, there's no way to predict how long a thread will run, and if there are multiple threads, there's no way to predict, with any certainty, which will finish first.
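
For reference, spawning a thread directly looks something like the sketch below. The method names here are hypothetical, and as mentioned above, Apple recommends operation queues over manual threads, but the underlying API is this simple:

- (void)startBackgroundWork {
    [NSThread detachNewThreadSelector:@selector(doBackgroundWork:)
                             toTarget:self
                           withObject:nil];
}

- (void)doBackgroundWork:(id)argument {
    // Each thread needs its own autorelease pool (more on this below).
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    // ... long-running work; don't touch the user interface from here ...
    [pool release];
}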

These two thread traits—the fact that they all share access to the same memory, and that there's no way to predict what share of the processing time each will get—are the root cause of a number of problems that come along for the ride when doing concurrent programming. Operation queues provide some relief from the timing problem, since you can set priorities and dependencies, which we'll look at a little later, but the memory sharing issue is still very much a concern.


Operation Queues


We're going to look at operation queues in a moment, but before we do that, we need to talk about operations. Operations are the objects that contain the sets of instructions that the operation queue manages. They usually take the form of custom subclasses of NSOperation. You write the subclass and put the code that needs to be run concurrently in it.
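
As a sketch, the interface for such a subclass might look like this. The class name and its properties are purely illustrative; the important part is that it inherits from NSOperation and declares whatever inputs and outputs the work needs:

#import <Foundation/Foundation.h>

@interface ImageLoadOperation : NSOperation {
    NSURL  *imageURL;     // input: where to load from
    NSData *imageData;    // output: the loaded bytes
}
@property (nonatomic, retain) NSURL  *imageURL;
@property (nonatomic, retain) NSData *imageData;
@end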


When implementing an operation for use in an operation queue, there are a few steps you need to take. First, you create a subclass of NSOperation and define any properties that you'll need as inputs to or outputs from the operation. Next, you override the method called main, which is where you put the code that makes up the operation. There are a couple of things you need to do in your main method. The first is to wrap all of your logic in a @try block so you can catch any exceptions. It's very important that an operation's main method not throw any exceptions. They must be caught and handled without being re-thrown, because an uncaught exception in an operation will result in a fatal application crash.


You will also have to create a new autorelease pool. Different threads cannot share the same autorelease pool, and because the operation will be running on a separate thread, it can't use the main thread's pool. It's therefore important to allocate a new one at the start of main and release it at the end.
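
Putting those two requirements together, a sketch of the implementation for the hypothetical ImageLoadOperation above might look like this. The work inside the @try block is a placeholder; the structure (the pool plus the exception handling) is the part that matters:

@implementation ImageLoadOperation
@synthesize imageURL, imageData;

- (void)main {
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    @try {
        // The long-running work goes here.
        NSData *data = [NSData dataWithContentsOfURL:self.imageURL];
        self.imageData = data;
    }
    @catch (NSException *exception) {
        // Handle the problem here; never let it propagate out of main.
        NSLog(@"Operation failed: %@", exception);
    }
    [pool release];
}

- (void)dealloc {
    [imageURL release];
    [imageData release];
    [super dealloc];
}
@end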


Now you know how to create operations, so let's look at the object that manages operations, NSOperationQueue. Operation queues are created like any other object. You allocate and initialize the queue like so: NSOperationQueue *queue = [[NSOperationQueue alloc] init];


At this point, the queue is ready to use. You can start adding operations to it immediately without doing anything else. Adding operations is accomplished by using the addOperation: method, like so: [queue addOperation:newOp];


Once the operation is added to the queue, it will begin executing as soon as it is ready and a thread is available for it. The queue can even start executing operations while you're still adding others. By default, operation queues decide how many threads to use based on the hardware available.
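
Here's how the pieces fit together, using the hypothetical ImageLoadOperation sketched earlier (the URL and the thread cap are illustrative values):

NSOperationQueue *queue = [[NSOperationQueue alloc] init];
[queue setMaxConcurrentOperationCount:2];   // optional: cap simultaneous operations

ImageLoadOperation *newOp = [[ImageLoadOperation alloc] init];
newOp.imageURL = [NSURL URLWithString:@"http://example.com/picture.png"];

[queue addOperation:newOp];   // the queue retains the operation...
[newOp release];              // ...so we can release our reference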
