This is how you can turn your iPhone 4 into a much more versatile video camera.

(Note that this article, as with most of my stuff, shows you not only how the camera can be reconfigured, but also how you can play back a YouTube video frame-by-frame; how you can create and upload packages to your own Cydia [an alternative to Apple's App Store] repository; and, last but definitely not least, it gives you a full source-level explanation of the entire app I've written to quickly switch between the different iPhone 4 camera configurations. No one has released anything similar for the iPhone 4 or the iPhone 3GS [for which I'll also release a very similar utility in a few days] before!)

I love using both video and stills cameras and, consequently, warmly welcomed the comparatively good one in the iPhone 4. However, its problems quickly become apparent and really annoying:

- compared to stills mode (that is, what the optics and the CCD itself are capable of), video mode uses a far narrower field of view (about 40 mm equivalent)

- in bad light, videos become pretty noisy because, in order to be as quick as possible, it doesn't use any kind of pixel binning or dynamic downsizing

- you can't alter its default 13 Mbps bit rate, which can be quite high for many usage areas.

Let's start with the first issue, the field of view, which, according to my measurements, is about 40 mm in 35 mm-equivalent terms, while with plain still photos it's around 32 mm; that is, (somewhat, but not very) wide-angle. The simple reason for this is that, while recording 720p video, the iPhone 4 only uses the innermost 1280*720 pixels of its CCD and not the entire surface of the CCD, which is more than double the size in both directions; that is, 2592*1936.
What happens if you crop? The field of view gets (much) narrower – as is the case with the iPhone 4 (and, for that matter, many well-known cameras – e.g., the Panasonic TZ18/ZS8, which, while it has a 24mm lens, only records 720p videos at the equivalent of 28mm).

What's the solution? Of course, using the entire surface of the CCD. How can you achieve this? By editing the LiveSourceOptions/Sensor key in /System/Library/PrivateFrameworks/Celestial.framework/N90/AVCapture.plist/AVCaptureMode_AudioVideoRecording (before 4.3) or /System/Library/Frameworks/AVFoundation.framework/N90/AVCaptureSession.plist/AVCaptureDevices[0]/AVCaptureSessionPresetHigh (in all 4.3.x iOS versions, but not in the currently available iOS5 b1/b2).

Using the entire surface of the CCD

The above key (LiveSourceOptions/Sensor) has two subkeys, Width and Height, which tell the iPhone the crop to be used when taking video. The default 1280 / 720 tells the system to use only the center of the CCD. Raising these values to the full width / height of the CCD, that is, 2592 and 1936, results in using the entire area of the sensor, dynamically resizing every frame to the target 720p video stream. (The size of the latter, unfortunately, can't be increased to, say, true 1080p size; that is, 1920*1080.)
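For illustration, here's roughly what the relevant fragment looks like after the change (an abridged sketch only; the real file contains many more keys at every level, and you only touch Width and Height under LiveSourceOptions/Sensor):

    <key>LiveSourceOptions</key>
    <dict>
        <key>Sensor</key>
        <dict>
            <key>Height</key>
            <integer>1936</integer>
            <key>Width</key>
            <integer>2592</integer>
        </dict>
        <!-- ...many other keys omitted... -->
    </dict>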

What will this mean, though? A lot of advantages – and, unfortunately, disadvantages. Let's start with the former.

Advantages of making use of the entire sensor surface

The biggest advantage is, as has already been pointed out, the significantly increased field-of-view. In layman's terms, you can get a lot more into a shot from the same position, even things that would otherwise remain outside the viewing area of the camera when using the non-enhanced, original camera configuration.

Pixel binning (that is, combining several original pixel cells into one pixel in the output video stream) also results in significantly reduced video noise. This won't be noticeable when shooting in good light; indoors, however, the difference is significant.
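To illustrate what binning does conceptually, here's a toy model in C. (This is just an illustration of the principle; the real binning/downsampling happens in the sensor/ISP hardware, not in app code.) Averaging each 2*2 block of sensor pixels into one output pixel averages out much of the random per-pixel noise:

    // Toy 2x2 binning of a grayscale frame.
    void bin2x2(const unsigned char *src, int srcWidth, int srcHeight,
                unsigned char *dst) {
        // Each output pixel is the average of a 2x2 block of input pixels;
        // noise, being random, partially cancels out in the sum.
        for (int y = 0; y < srcHeight / 2; y++) {
            for (int x = 0; x < srcWidth / 2; x++) {
                int sum = src[(2 * y) * srcWidth + (2 * x)]
                        + src[(2 * y) * srcWidth + (2 * x + 1)]
                        + src[(2 * y + 1) * srcWidth + (2 * x)]
                        + src[(2 * y + 1) * srcWidth + (2 * x + 1)];
                dst[y * (srcWidth / 2) + x] = (unsigned char)(sum / 4);
            }
        }
    }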

Disadvantages of making use of the entire sensor surface

Pixel binning means heavy runtime overhead, so the effective frame rate suffers greatly. While the iPhone 4 camera is able to shoot at a steady 30 fps (frames per second) using the original configuration, pixel binning effectively halves this speed. To show the difference, I've created several demo videos; the first two of them are as follows:

(iPhone 4 using the original configuration; direct link)

(iPhone 4 using pixel binning; direct link)

The video I've shot is a 60 fps one and is played back on my MBP at a true 60 fps. This means a 30 fps recording will capture roughly every second displayed number, and a 15 fps one every fourth. To quickly check out the output of the two configurations without having to save the YouTube Flash videos and play them back in the built-in Mac OS X QuickTime (which can step through videos frame-by-frame when you repeatedly press the Right cursor key), first download the Greasemonkey Firefox add-on and, after that, click THIS link (linked from HERE), which adds a lot of control capabilities to YouTube videos. (Make sure you accept all the defaults and at least temporarily disable any kind of Flash blocker when using YouTube Enhancer; otherwise, the new GUI buttons won't be shown and you won't be able to examine the video frame-by-frame. Unfortunately, neither the current Flash nor the HTML5 interface can step through frames, as is also explained HERE.)

For example, if you step through the non-enhanced video recording, you'll see the following series of frame numbers recorded: 80-83-84-86-88-90-92-94-96-99-101-102-104-106-108-110-112-114. This is indeed almost exactly 30 fps. The enhanced (pixel-binned) version, on the other hand, shows the following frames: 1-5-10-13-17-20-26-30-32-35-40-45-48-53-57-63-66-68-73-77-80-86-90-93-96-100-106-108-113-118. If you divide 118 (the number we've got to) by 30 (the number of frames I've listed), you get a step of almost exactly four source frames per recorded frame – that is, about 15 fps.
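Spelling out the arithmetic:

    118 source frames / 30 recorded frames ≈ 3.9 source frames per recorded frame
    60 fps (the playback speed of the source video) / 3.9 ≈ 15.3 fps effective recording speed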

This means you won't want to record quick movements with the camera enhancements. For rather static family meetings, where there is little movement, however, it's optimal.

Other enhancement tips: TemporalNoiseReductionMode

Some iPhone 3GS users recommended not using noise reduction (LiveSourceOptions/TemporalNoiseReductionMode = 0 instead of the default 1) at all. In my test, the iPhone 4 turned out not to get any faster when I disabled inter-frame noise reduction – the CPU/hardware is too fast, and it's really the reading speed of the CCD that counts.

The test video showing recording with noise reduction disabled is as follows (direct link):

The numbers shown are as follows: 1-4-7-12-13-18-23-27-30-36-38-42-46-53-56-58-62-66-70-73-78-83-86-90-96-98-103-106-110-116-120. That is, almost exactly 15 fps, just like in the noise-reduced case.

The original 720p configuration also remains 30 fps, which you can also check out using my dedicated test shot (original here):


Showing the real difference in field-of-view

The above original-vs.-enhanced video shots were taken from approximately the same position and already show that the enhanced version has a much wider viewing angle. Let's take some static video frame grabs with far better accuracy (controlled camera position), also compared to dedicated still shots and some other cameras and smartphones (click the images for the full versions!):


(iPhone 4 enhanced full video frame grab)


(iPhone 4 non-enhanced full video frame grab)


(iPhone 4 still shot)

Take a closer look at the first two shots. The first shows a lot more than the second, doesn't it? Yes, the difference in field-of-view between the non-enhanced (default) and the enhanced configuration is pretty big. The latter should also be compared to the (third) still shot – as you can see, the field-of-view of the two images is the same, meaning they both use the same CCD surface – as opposed to the default video mode.

Incidentally, let's compare these to some other smartphones, cameras and other gadgets! Let's start with the Nokia N95 (running the latest firmware version), which, in its heyday (2007), was the phone with the best-quality camera:


(Nokia N95 video frame grab)


(Nokia N95 photo)

As you can see, video mode on the Nokia N95 suffered from the same problem as the iPhone 4 without the enhancement: it uses only the center of its CCD, meaning a far narrower field-of-view. In photo (and enhanced video) mode, the iPhone 4 has a somewhat wider field-of-view than the N95 in photo mode. Noise levels are approximately the same; low-light color saturation is better on the iPhone.

Let's move on to one of the most popular digital travel zoom cameras of all time, the Panasonic ZS3 (a.k.a. TZ7). In both photo and video mode, it produces a far wider field-of-view than either of the above (iPhone 4 / Nokia N95) cameras:


(ZS3 photo)


(ZS3 video)

Incidentally, it's worth noting that the low-light video quality of the ZS3 is far inferior to the iPhone 4's (regardless of whether the latter runs in enhanced or non-enhanced mode): the “watercolor” effect clearly rears its ugly head when there's little light and, therefore, you must switch to iA mode to enable low-light mode to make sure the recording isn't too dark. (The ZS3, just like all pocket travel zooms, has a rather slow lens – the largest aperture is f/3.3 at 25mm – and, paired with the small sensor and the 1/30 s or faster shutter used in video, this all results in poor sensitivity under low-light conditions.) This means its effective resolution is far lower than even that of the iPhone 4 in enhanced mode.

Incidentally, speaking of the Pana travel zooms, they have a much worse-quality AVCHD encoder than the iPhone 4 (or, for that matter, the Nikon P300, one of the best budget advanced point-and-shoot cameras today, which doesn't show any kind of panning mud, not even at the non-default, lower-speed 12 Mbps setting). An example frame grab of a typical “muddy” 9 Mbps ZS3 stream (the “mud” is apparent even at the highest, 18 Mbps data rate, albeit considerably less frequently):



This means you can safely quick-pan the iPhone 4 at the default setting (13 Mbps 720p) without running into the heavy and really ugly mud so characteristic of Panasonic's (older) AVCHD encoders – and, according to my tests, the same holds even at lower settings.

Note that “muddiness” has more to do with the quality of the (runtime) AVCHD encoder than with the data rate. (Unhacked) Panasonic Lumix GH-1 users have always suffered from mud in 1080p (but not in 720p) because the AVCHD encoder in the GH-1 is pretty weak. (By heavily increasing the data rate through the GH-1 hack, the mud was reduced substantially, but it's still there if you pan the camera quickly.) The same is true of the ZS3/TZ7 from the same manufacturer, as has already been pointed out.

Low-light superiority

I've already mentioned that pixel binning means considerably better low-light performance. Let's take a look at this, comparing the low-light behaviour of the enhanced mode to that of the original one. Let's start with the former:



and the original one:



Wow! There is a difference! The enhanced version is much brighter and has way less color noise (check the latter out mainly on the white surfaces)!

Let's compare all this to the Nikon P300 with its lovely f1.8 lens!



(The shot was taken utilizing the super-bright f/1.8 lens at 24mm – hence the considerably wider field-of-view, even compared to the roughly 30-32mm enhanced iPhone version.)

As you can see, the enhanced iPhone 4 delivers footage at least as good, light sensitivity-wise, as the (in this regard already quite good) Nikon P300. (It's even better in that it has way less color smearing; compare, for example, the smearing of the red color in the Parallels Quick Reference Guide's title! The iPhone 4's rendition is far superior and less marred by smearing.) The non-enhanced iPhone 4 is way behind.

Incidentally, technology does improve! I remember having to use 500W (!) bulbs put almost in the face of my parents to get tolerably well-lit footage on my 15 DIN 8 mm color movies back in the eighties. (When shooting on 21 DIN b&w material, “only” a 200W bulb was sufficient, lighting my subjects from 2-3 metres.) Video cameras of the seventies and early eighties weren't much better either. Did you know that probably the first “portable”, battery-operated, prosumer color video camera recording system, the Akai VT-150, released in 1975, required a minimum of 600 lux? That's a 500W halogen bulb 2 metres from the subject! (More info on this wonderful vintage recorder HERE; the full manual of its black-and-white predecessor is HERE. I love this kind of old stuff...) And now, these cameras can take pretty good video in little more than candlelight... Orders of magnitude better light sensitivity than back in the 70s and 80s, both 8mm film- and video camera-wise!

Disadvantage

As has already been pointed out, the enhanced mode is not recommended if you want to shoot fast-moving subjects or quickly pan the camera. Also, the horizontal resolution takes a considerable hit – it “only” records 1024 pixels instead of 1280. Nevertheless, if you use it where it's most recommended (situations where the low-light superiority and the way wider field-of-view are a considerable advantage), you'll be very pleased with the results.

Reducing the bit rate

So far, I've only explained the differences in field-of-view and low-light sensitivity. I've, however, made another parameter easily settable: the encoding data rate. Why would you want to play with it?

Assume you're like me; that is, you love recording everything that happens to you or your loved ones on video. Then, you'll want a camera that does this automatically, without having to restart recording every, say, half an hour and/or free up the built-in memory every, say, 3-4 hours.

Without reducing the bit rate of the iPhone 4, you can only record about 50 minutes without manual intervention (that is, restarting the recording). Also, even if you have a 32 Gbyte model without any additional apps/media, you can only record about 6 hours of video before having to transfer everything to your PC or Mac.
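These figures follow directly from the data rate (rough, rounded math; audio and container overhead ignored):

    13 Mbps ≈ 1.6 MB/s ≈ 5.9 GB/hour
    32 GB / 5.9 GB/hour ≈ 5.5 hours – the “about 6 hours” above
    at 1/16th the data rate: ≈ 0.37 GB/hour, so the same 32 GB is good for days of footage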

If you record a subject that doesn't change much (e.g., the above-mentioned family meeting with family members at least 3-4 metres from the phone), you can drastically(!) lower the data rate of the recording. This solves both problems explained above: no 50-minute limit and no need to free up the phone every 3-6 hours. Actually, if you use a recording data rate of 1/16th of the original 13 Mbps, you can record even ten hours(!) in one go, without any kind of intervention, preferably connected to the mains.

BTW, a word on the video light. Many have been afraid of using the LED video light for more than, say, 10 minutes. I've tested this pretty thoroughly and can report: you can safely leave it on for extended periods of time, even 2-3 hours. (At least after 3 hours of testing – taking video at a reduced data rate with the video light on, connected to the mains – my iPhone 4, which is in a Griffin case, didn't fry.) Of course, the single LED in the device won't help much in illuminating your subject unless it's really close (say, 1.5 metres at most) to your phone, not even with the “wide” hack, which delivers inherently better low-light performance and sensitivity than the default. In these cases, you might want to consider a battery-operated external light. I, for example, have been using the HDV-Z96 LED Photo/Video Light Kit, which I acquired for US$75 and which works just great, even with less light-sensitive cameras like the Pana ZS3, let alone the, in this regard, better ones like the (enhanced) iPhone 4 or the Nikon P300 (or similar P&S cameras with a very bright lens).

Back to the subject of data rate. If your subject doesn't move / change much, you can reduce the data rate. It's just one setting in the dedicated plist field. You can go as low as even 1/32nd of the original data rate if the subject is (fairly) well lit (to avoid sensor noise, which is one of the biggest enemies of low data rates and, therefore, high compression) and changes / moves little.

Changing the data rate is completely independent of changing the field-of-view. This means that if the current (narrow) field-of-view and the (lowish) light sensitivity are sufficient (or even preferable – see, e.g., the lower frame rate of the “wide” mode) for you, you won't want to change the field-of-view, but may still want to change the data rate to reduce storage usage and/or increase the recording time without manual restarts.

(direct link)

Note: the more the movement and the lower the data rate, the worse the visual quality and the more compression artifacts you get. Compare, for example, the first video (up at the beginning of the article) to the one above. The former was shot using the full data rate; the latter with 1/32nd of it. As you can see, there are a lot of artifacts (“squares”) in the second. Nevertheless, this is an extreme example; you won't see results like this when using data rate reduction sensibly:

1.) you won't want to use greatly reduced data rates when shooting with a non-steady iPhone (i.e., holding it in your hand) or with your subjects this close to the camera.

2.) With the iPhone placed on a steady surface and the subjects e.g. sitting at a safe distance (and/or moving little), the results will be almost as good as recording at the full data rate – and you use only a sixteenth of the storage. An example shot with this setting (note the subjects being far away from the camera and the scene being pretty well lit, to avoid heavy, visible artifacting caused by video noise):



Note that moving mist, water surfaces etc. require high data rates. Look at the following frame grab, where the actual subjects (me and my relatives) are quite far from the camera, but a significant part of the scene is occupied by the smoking fire. The results are disastrous (“squares” all around, which would be almost nowhere to be seen without the smoke) and are pretty similar when you set your iPhone to shoot you while you're playing / swimming in the water:



It was indeed a big mistake on my part not to remember to switch back to the normal data rate.

My setter program

Now that you know when to use the wide-angle mode and when to decrease the data rate (and when to avoid them altogether), let me present my application that does all this. First, I'll give you a user-level overview and, then, explain to fellow (would-be) iOS programmers how it works.

First and foremost, you need to jailbreak your iPhone 4. This is possible with all 4.x iOS versions in untethered mode. (It's even possible with the currently available two iOS5 betas, but my program itself won't work under these OS versions because of the reworked camera engine.) Jailbreaking is legal in the States (see THIS, THIS and THIS for more info) and in many other countries, e.g., in the EU. With 4.x versions, all you need to do is download THIS (Mac) or THIS (Windows) app, run it and follow the on-screen instructions. (Note that these are for iOS 4.3.3. In the future, the links may change; if, by the time you read this, there's another 4.x version, please check THIS for updated redsn0w download links.)

During jailbreaking, make sure you install Cydia. After the jailbreak is done, start Cydia and select any of the three possibilities (Developer will be just fine). Then, add “http://www.winmobiletech.com/cy” or “http://winmobiletech.com/cy” as a Cydia repository source (currently, this is the repository that hosts my package) and, inside it, select the iPhone 4 camera enhancer:



Install it and start it after making sure you've killed the Camera app. If you don't do the latter, the changes won't be visible in the app.

After starting, you'll be greeted by the following screen:



Here, you have two picker columns to (independently) select your field-of-view and data rate. Feel free to make a selection. For example, if you want to set the field-of-view to wide and the data rate to 1/16th of the original, make the following changes:



Now, tap the Go button to commit the changes.

There are two cases in which you'll be shown an error message right after starting the app: you're trying to run the app under iOS5.x, or you have changed the default password from “alpine” to something else. In the first case, either downgrade to iOS 4.3.3 (during the beta period, this can freely be done via iTunes without using any “hacks” like TinyUmbrella); in the second, just change the password back. (After running the app once, you can change the password again – it only needs to set write permissions for the system-level camera plist file once; subsequent write accesses will already be allowed, independent of the actual root password.)

If you want to fine-tune more parameters (not only the data rate and the one-step switch between wide and narrow field-of-view), you can do this under the second tab, “Advanced view”. In general, I don't think you'll ever need this; however, I still provide it, should you really want to play with these parameters. At first run, it presents a default settings list without a checkmark:



Each element of the list corresponds to a given configuration setting. For example, if you want to use (or fine-tune) the parameters of, say, the “wide” view angle (corresponding to XGA output video resolution) and quarter data rate, select the “XGA – 1/4 bit rate” option.

In the detailed view, make your changes (if any) and, then, tap the “Save” button in the upper right corner. You'll be shown an acknowledgement dialog telling you the changes have been written back to the system:



After this, in the main selection list, the just-edited (or just-saved) entry will have a checkmark next to it (even after restarting the app):



Finally, the last tab is just a quick explanation:





Now you can stop reading (and start experimenting with my tool and/or taking videos) if you aren't a programmer and/or aren't at all interested in programming or creating Cydia packages and repositories. If you are, read on – I'll show you a lot of secrets!

First, let's start with the easiest part: Cydia package generation and upload.

Uploading to Cydia

Basically, you will just want to follow THIS nice tutorial.

There may be three issues:
- Fink may refuse to compile. Interestingly, I got mixed results: once, it installed nicely on my 10.6.6 machine; another time, it refused to do so on a freshly set-up 10.6.7 one. CMake was already present on the latter; I don't know whether that caused the problem. I didn't spend much time on this problem, as I could install the binary distribution just fine (there are binary distros for 10.5 but not for 10.6) on my iDeneb 10.5.x running on my IBM ThinkPad T42, and I was able to create the .deb file there. (Sometimes, it's great to have old OS X versions around – even on otherwise much, much inferior hardware...)

- MD5 hash generation doesn't work as explained in the tutorial. Fortunately, there's an even easier way of doing this: just use the md5sum utility built into OS X (md5sum name.deb) in Terminal.

- If you can't decompress ZIP files remotely on a Web server (that is, on your would-be Cydia repository), you can also upload the package contents already decompressed.

Programming

So, how does this all work? First of all, this is a tab-based application. The first and the third views are pretty simple: the first only contains a picker and a button, and the third a UIWebView for presenting styled help. As a UIWebView can't be initialized from Interface Builder with default HTML content, I also needed to create a (very simple) View Controller (VC for short) for it to initialize the component with the help text. (By sticking with a simpler, non-styled text viewer component, I could have avoided this; those, however, don't allow for formatting text or even inserting newlines, which is why I went with this otherwise more complicated solution.)

I assume you already know what picker and list delegates are and what the invocation / return patterns of table view components look like (this is basic stuff explained by every single iOS programming textbook); therefore, I'll “only” discuss the questions pertaining to making the system file writable (it's much more complicated – and, of course, very sparsely documented! – than you may think) and to reusing the same setter code between the Simple and Advanced views.

First of all, HERE is the complete source code(!!) package.

First, let's take a look at the app delegate. As I've built the entire project around a tab bar controller as the outermost VC, you'll know at once why the first assignment is “self.window.rootViewController = self.tabBarController;”. After the usual root view displaying ([self.window makeKeyAndVisible];), two not very widely discussed subjects follow:

- getting the OS version and
- using the system() function, together with 'su', to change the permissions of a file so that the end user doesn't have to do this from, say, CyberDuck.

First, getting the iOS version. This is very important, as iOS 4.3 completely changed the structure (and location) of the camera configuration plist. It's comparatively easy: [[UIDevice currentDevice] systemVersion]. This returns an NSString; in order to avoid “less”/“greater” string comparisons (fortunately, these are better supported in iOS than in, say, Java; see, for example, if ([[[UIDevice currentDevice] systemVersion] compare:@"4.3" options:NSNumericSearch] != NSOrderedAscending)) and to keep the code as clean as possible, I've gone the floatValue way:

BOOL systemVersionHigherThan43 = NO;
if ([[[UIDevice currentDevice] systemVersion] floatValue] > 4.3)
    systemVersionHigherThan43 = YES;


Now, the second problem: granting write access rights for everyone to either /System/Library/Frameworks/AVFoundation.framework/N90/AVCaptureSession.plist or /System/Library/PrivateFrameworks/Celestial.framework/N90/AVCapture.plist, depending on the OS version. You must use the above-mentioned system() call here. It's a bit convoluted:

system("echo alpine | su -c 'chmod a+w /System/Library/Frameworks/AVFoundation.framework/N90/AVCaptureSession.plist' root");

What does this do? It invokes the su command, with the -c flag passing the actual chmod command to execute. To pass the (default iOS) system password to su, we echo “alpine” into it at the beginning.

Why do we change the file access rights from the app? Easy: without these instructions, my users would need to use an external tool; for example, iFile, available right from Cydia (note that the trial version is also able to change file permissions), or the above-mentioned CyberDuck running on the desktop. (The latter also requires installing OpenSSH on the device.) All in all, this is rather complicated and error-prone stuff – inexperienced users can mess up the entire system, even with iFile, let alone with the OpenSSH + CyberDuck combo.

Should you need to change the permissions of a system file, use the above approach instead of forcing the user to download and use iFile (or, even worse, a desktop utility also requiring the local install of OpenSSH).

Incidentally, some words on accessing files and databases not visible even to projects deployed from Xcode. (This has nothing to do with the actual subject of the article, but you may have faced / will run into this problem nonetheless, so I elaborate on it, as I spent quite some time gaining programmatic access to these files and databases back in the iOS 3.x days.) There are some files you just can't access for reading/writing even from Xcode on a jailbroken machine, not even after (even recursively) changing the permissions to allow for global access. Some examples of these files (databases) are the SMS database (/var/mobile/Library/SMS/SMS.db) or, on pre-iOS4 devices (in iOS4+, there is already Event Kit, which allows easy predicate-based access; see THIS if interested), the Calendar database (/var/mobile/Library/Calendar/Calendar.sqlitedb). These databases are all accessible from any app installed under /Applications (as opposed to the AppStore's / Xcode's /var/mobile/Applications). By default, all apps downloaded from Cydia install themselves under /Applications and, therefore, have access to these files.

Let's go on. If the file remains non-writable (or plain inaccessible; for example, because it's not there, which is the case in the current two betas of iOS5, or because the user has changed the default “alpine” password), we display an error message telling the user about this and asking him to terminate the program. (Remember: Apple doesn't like programs terminating themselves. Of course, this isn't an AppStore app, so we could use exit() nonetheless.)
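A minimal sketch of this check (the alert wording is mine, and the real app's identifiers may differ):

    // After the system() call shown above, verify the chmod actually succeeded:
    NSString *plistPath = systemVersionHigherThan43 ?
        @"/System/Library/Frameworks/AVFoundation.framework/N90/AVCaptureSession.plist" :
        @"/System/Library/PrivateFrameworks/Celestial.framework/N90/AVCapture.plist";
    if (![[NSFileManager defaultManager] isWritableFileAtPath:plistPath]) {
        // Missing file (iOS5 betas) or a changed root password:
        // tell the user and ask him to quit.
        UIAlertView *alert = [[UIAlertView alloc]
            initWithTitle:@"Error"
                  message:@"Can't write the camera configuration file."
                 delegate:nil
        cancelButtonTitle:@"OK"
        otherButtonTitles:nil];
        [alert show];
        [alert release];
    }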

First tab

Otherwise, we just go on displaying the GUI. The default (first) tab is the Simple view. Its accordingly named VC, FirstViewController, creates an instance of SystemPlistContentWrapper as the very first instruction in viewDidLoad.

This SystemPlistContentWrapper is, as you may have guessed from its name, a wrapper class for the contents of the system plist that needs to be edited. I had to separate it from the VC, as there are two VCs that both read in and, then, write back the (possibly modified) full content of the given plist file, and I needed to avoid hard-to-manage code duplication.

Reading the entire content of the plist file is done in populateSystemPlistDictionary and is very important, as we only modify a few of the values of the plist, which contains tons of additional keys. This is why we need to save its entire contents into an in-memory NSMutableDictionary (aVCaptureSessionTempDict), from which, in the next step, we extract the value with the key “AVCaptureDevices” (in all iOS versions): self.arrayOfHardwareCameraItems = [aVCaptureSessionTempDict objectForKey:@"AVCaptureDevices"].

After this, if we're on iOS4.3+, we can be sure it's an array. As the back camera is its first element, we get it with a plain [self.arrayOfHardwareCameraItems objectAtIndex:0]. This is a traditional dictionary, of which we only need to modify AVCaptureSessionPresetHigh – the dictionary that contains the values (and other dictionaries) used when shooting video with the built-in Camera application.
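In other words, the iOS 4.3+ reading path boils down to something like this (a sketch based on the description above; apart from the names mentioned in the text, the local variable names are my own):

    NSString *path =
        @"/System/Library/Frameworks/AVFoundation.framework/N90/AVCaptureSession.plist";
    // Read the whole plist: we must preserve every key we don't touch.
    NSMutableDictionary *aVCaptureSessionTempDict =
        [NSMutableDictionary dictionaryWithContentsOfFile:path];
    self.arrayOfHardwareCameraItems =
        [aVCaptureSessionTempDict objectForKey:@"AVCaptureDevices"];
    // The back camera is the first element of the array...
    NSDictionary *backCamera = [self.arrayOfHardwareCameraItems objectAtIndex:0];
    // ...and this sub-dictionary holds everything Camera uses for video recording:
    NSDictionary *presetHigh = [backCamera objectForKey:@"AVCaptureSessionPresetHigh"];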

Under 4.0...4.2, we first save aVCaptureSessionTempDict to currentAVCapture, as it already contains the dictionaries that need to be saved, and we also get the dictionary used for back-facing camera recording with Camera (AVCaptureMode_AudioVideoRecording).

After this, the reading process is the same for the two iOS version families, as it's only down to this depth that there are any differences between iOS 4.0...4.2 and 4.3. We copy the current values of everything that we (or the user) may change later into long-lived instance variables.

While still in this class representing the plist file, let's take a look at the opposite of the just-discussed populateSystemPlistDictionary: writeDataToSystemFile. It's in charge of saving the (possibly modified) instance variables, with some of their dictionaries possibly updated based on the values of the method's actual parameters, to the plist in the file system. The vast majority of the code is the same for the two iOS version families; the real difference, as with reading the plist in the previous method, is in updating the topmost data structures. It's the exact opposite of reading them and should pose no problems if you've already understood how populateSystemPlistDictionary works.
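Schematically, the write path is the mirror image (again just a sketch; the real method updates several sub-dictionaries from its parameters):

    // Update the interesting values in (mutable copies of) the nested dictionaries...
    NSMutableDictionary *sensor = [NSMutableDictionary dictionaryWithDictionary:
        [[presetHigh objectForKey:@"LiveSourceOptions"] objectForKey:@"Sensor"]];
    [sensor setObject:[NSNumber numberWithInt:2592] forKey:@"Width"];
    [sensor setObject:[NSNumber numberWithInt:1936] forKey:@"Height"];
    // ...propagate them back up into the top-level dictionary (elided here)...
    // ...and serialize the whole structure back to the system file:
    [aVCaptureSessionTempDict writeToFile:path atomically:YES];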

Back to the picker tab (the first one): after creating and populating a SystemPlistContentWrapper instance in viewDidLoad, we read the current value of “AverageDataRate” from the already-set SystemPlistContentWrapper instance (int dataRate = [(NSNumber*)[self.mySystemPlistContentWrapper.currentAVCaptureSessionPresetHigh_VideoCompressionProperties objectForKey:@"AverageDataRate"] intValue]) and, based on this value, scroll the right picker column to the corresponding row (unless the user has entered some value in Advanced view that doesn't correspond to any of the predefined divisions). We do the same with the left column; here, we examine the value of LiveSourceOptions/Sensor/Width to decide whether we use the entire surface (all pixels) of the CCD or just the traditional 720p ones. We set up the two pickers' labels and scroll to the selection.

Second tab

The second tab (Advanced view; class: RootViewController) has a plist of its own, which stores all the freely modifiable user configurations. As the user is allowed to set any of the fields (even the names of the list elements), we store all this data in that file. By default, it has 10 elements (see #define NR_OF_ENTRIES 10) plus the index of the last selected element, self.lastSavedListIndex. The first elements are arrays; the last one is a plain NSNumber.

In viewDidLoad, we read this local plist file and populate a MyDataWrapper instance for each of its entries. This class has nothing to do with SystemPlistContentWrapper: the latter is in charge of reading and writing back the contents of the system plist file, while the MyDataWrapper instances in RootViewController purely contain the in-memory representation of the private configuration file used solely by RootViewController.

If the configuration file doesn't exist yet (because we've never started the app or never switched to the second tab), we call the init constructors of MyDataWrapper. There are several of them, corresponding to the different configurations listed in the main list view (XGA vs. 720p, different data rates, fps values etc.). Finally, at the end of viewDidLoad, we make sure we back up (overwrite) this private file when the user exits the app. We register for both UIApplicationWillTerminateNotification and UIApplicationWillResignActiveNotification and save the file upon receiving either kind of notification. Actually, as we've declared the project not to run in the background (see “Application does not run in background” in iP4VCamEnhancer-Info.plist), we don't strictly need UIApplicationWillResignActiveNotification. Nevertheless, it's good practice to keep the registration code for both notifications ready to copy into your projects, should you some time forget to subscribe to UIApplicationWillResignActiveNotification in multitasking-enabled devices and apps, or to UIApplicationWillTerminateNotification on non-multitasking-enabled devices, multitasking-enabled devices running pre-iOS4, or non-multitasking-enabled apps. (Only one of them is called, based on the distinction above; therefore, to keep your code generic and friendly even with non-multitasking but iOS4-capable devices like the iPhone 3G or iPod touch 2G, register for both.)
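The registration itself is the usual pattern (the saveConfiguration: selector name is my placeholder, not necessarily the app's actual method name):

    NSNotificationCenter *center = [NSNotificationCenter defaultCenter];
    // Fired on non-multitasking devices/apps when the app quits:
    [center addObserver:self selector:@selector(saveConfiguration:)
                   name:UIApplicationWillTerminateNotification object:nil];
    // Fired on multitasking devices when the app is sent to the background:
    [center addObserver:self selector:@selector(saveConfiguration:)
                   name:UIApplicationWillResignActiveNotification object:nil];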

RootViewController instantiates the class DetailController to present its detailed viewer (and editor) when any of the list elements is selected (in didSelectRowAtIndexPath). In addition to passing a reference to itself (so that DetailController can directly access the properties in RootViewController), we also pass the selection index as myDataWrapperArrayIndex. This means it'll be DetailController that reads the selected record from the array of list records; we don't do this in RootViewController, which would make parameter passing a nightmare (because of the sheer number of parameters).

DetailController is pretty straightforward. First, it allows for scrolling the text edit fields up so that the on-screen keyboard doesn't hide them. Note that, in order to keep the code as clean and short as possible, I haven't implemented dynamic position and keyboard height inquiry; I simply scroll the bottom-most four fields up by 155 pixels (the topmost fields, as they aren't obscured by the keyboard, aren't scrolled). When text editing ends (e.g., the user switches to another field or taps “Enter” on the virtual keyboard), we scroll back (see textFieldDidEndEditing). This is a bare-bones solution; the prettier, dynamic solutions would have taken far too much code.
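The fixed-offset approach looks roughly like this (a sketch: the 155-pixel offset is from the description above, while the tag-based test for the bottom-most fields is my assumption):

    - (void)textFieldDidBeginEditing:(UITextField *)textField {
        if (textField.tag >= 4) { // only the bottom-most four fields need scrolling
            CGRect frame = self.view.frame;
            frame.origin.y = -155; // slide the whole view up from under the keyboard
            self.view.frame = frame;
        }
    }

    - (void)textFieldDidEndEditing:(UITextField *)textField {
        CGRect frame = self.view.frame;
        frame.origin.y = 0; // scroll back when editing ends
        self.view.frame = frame;
    }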

viewDidLoad, as you may have guessed from the discussion of “myDataWrapperArrayIndex” in the root controller's didSelectRowAtIndexPath, directly accesses the array storing the list values to fill in the initial values of the text edit fields. There isn't anything fancy about this.

When the detailed view is hidden (the user taps the back button), in viewDidDisappear, we read the actual values of all text input fields and store them in the properties of a MyDataWrapper instance. (You've already seen what it's used for in RootViewController.) When all fields are initialized, the new MyDataWrapper instance is directly written back to the root controller's array of list elements via the back reference initially passed to the detailed view's constructor. Note that, here, we do not update the camera plist; if the user returns from a page without tapping the Save button, the plist is left intact.

The plist update, as has just been mentioned, takes place when the Save button is tapped. The Save button is displayed and its action callback is registered at the end of viewDidLoad (see UIBarButtonItem *addButton = [[UIBarButtonItem alloc] initWithBarButtonSystemItem:UIBarButtonSystemItemSave target:self action:@selector(writeDataToSystemFile)]; self.navigationItem.rightBarButtonItem = addButton;). The writeDataToSystemFile method just calls writeDataToSystemFile in SystemPlistContentWrapper in a similar way as was done from the picker view.

Third tab

Finally, the third tab is very simple. The only code I've added to viewDidLoad in the tab's VC, ThirdAboutTabViewController, is a plain [UIWebView loadHTMLString] call with an NSString literal containing some formatted HTML help.
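That is, something along these lines (the helpWebView outlet name and the HTML content are my assumptions):

    - (void)viewDidLoad {
        [super viewDidLoad];
        // Load the styled help text straight into the web view:
        [self.helpWebView loadHTMLString:
            @"<html><body><h3>iPhone 4 camera enhancer</h3>"
             "<p>Select the field-of-view and the data rate, then tap Go.</p>"
             "</body></html>"
                                 baseURL:nil];
    }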

Icons

I've got the app icon from HERE and the tab bar icons from HERE. If you're working on a similarly free app, these sites will be a gold mine for you.

What next?

In a day or two, I'll release something that folks are also eagerly awaiting: exactly the same utility, but tailored for the iPhone 3GS, greatly enhancing its video capabilities! Stay tuned!

 



Author Details

Werner Ruotsalainen

Werner Ruotsalainen is an iOS and Java programming lecturer who is well-versed in programming, hacking, operating systems, and programming languages. Werner tries to generate unique articles on subjects not widely discussed. Some of his articles are highly technical and are intended for other programmers and coders.

Werner also is interested in photography and videography. He is a frequent contributor to not only mobile and computing publications, but also photo and video forums. He loves swimming, skiing, going to the gym, and using his iPads. English is one of several languages he speaks.