When deadlines loom, even skilled and experienced programmers can get a little sloppy. The pressure to ship may cause them to cut corners and look for a quick and easy solution, even if that solution is sure to cause trouble later on. Eventually, their coding style devolves into copy and paste programming, a lamentable tactic that involves cherry-picking snippets of code from a past project and putting them to use in the current one. Of course, the proper solution is to factor out the code into some kind of reusable library, but due to time constraints, it’s simply duplicated wherever it’s needed. Any bugs in the original code have now spread to a dozen different places in a dozen different projects. It’s an algorithm for chaos.
Yet in the world of iPhone applications, copy and paste programming seems to be disturbingly common. The fact that so many iPhone apps are short-term, one-off projects doesn’t help, but the situation has been aggravated even more by Apple’s security restrictions. In particular, dynamic linking to any shared library that doesn’t ship with the OS is strictly forbidden. One could argue that this rule is a necessary side-effect of the iPhone’s sandboxing security model, but even workarounds such as consolidating code into a static library are extraordinarily difficult. Another contributing factor is that the iPhone API is still relatively immature, and developers too often require custom code to fill in its gaps.
This situation has transformed more than a few iPhone programmers into copy and paste programmers. When they inevitably encounter some limitation with the iPhone API, the typical response is:
- Search online for a solution
- Find a snippet of code somewhere that gets the job done (usually at Stack Overflow or iPhone Dev SDK)
- Copy and paste the snippet into their project
- Move on to the next problem
Now imagine what happens when a thousand iPhone developers find the same snippet. Suddenly the problems of copy and paste programming have gone global. Offline, a bug in a single snippet of code may infect a dozen projects; online, it can spread to thousands.
As a reluctant copy and paste iPhone programmer myself, I’ve witnessed this scenario first-hand. I recently encountered a limitation with a certain iPhone class—UIImage—and I found in a discussion forum what seemed to be a popular, well-regarded solution. The code snippet was the first hit in a Google search, and many readers had replied with thanks to its author. However, a bit of testing showed that it worked for most images but completely failed for others. By the time I stumbled upon it, the buggy code had probably spread to countless programs already.
In the process of finding the bug and posting the fix, I ended up writing a substantial amount of additional code to address various other annoyances related to UIImage. The complete listing is available for download below. Though it won’t solve the copy and paste problem, it should be a welcome remedy for other iPhone developers who have run into similar obstacles.
Background
Programming for the iPhone, a highly graphical device, necessarily involves a substantial amount of image manipulation. Its SDK therefore provides an abstraction called UIImage that handles much of the effort in importing and drawing images. For example, imagine you want to load a JPEG file, scale it down to icon size, and give it rounded corners. These tasks may require tens or even hundreds of lines of code on other platforms, but on the iPhone, it’s only a matter of instantiating a UIImage, passing it to a UIImageView of the appropriate size, and setting the cornerRadius property.
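As a minimal sketch of that workflow (the file name and sizes here are illustrative, not from the original post; QuartzCore must be linked for the layer properties):

```objc
#import <UIKit/UIKit.h>
#import <QuartzCore/QuartzCore.h>

// Load a JPEG, let a UIImageView scale it to icon size, and round the corners.
UIImage *image = [UIImage imageNamed:@"photo.jpg"]; // hypothetical file
UIImageView *iconView =
    [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 57, 57)];
iconView.image = image;            // the view scales the image at display time
iconView.layer.cornerRadius = 10;  // run-time rounded corners
iconView.layer.masksToBounds = YES;
```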
Despite its ease of use, or perhaps because of it, UIImage suffers from some serious limitations. Key among these is its lack of support for resizing the image, a feature that is normally handled dynamically by its companion, the UIImageView component. However, should an iPhone application need to reduce the size of an image for storage or for exchange with an external entity (such as a web server), the UIImage class is insufficient.
Of course, UIImage is not the only means of image manipulation on the iPhone. It ships with a rather sophisticated graphics API, known as Quartz 2D, that offers low-level control of bitmap data. Clearly, the functionality for resizing an image exists, although taking advantage of it is not straightforward and requires the developer to write non-trivial code. How best to accomplish this task has been the source of much confusion and debate, particularly in forums such as iPhone Dev SDK:
- Resizing a photo to a new UIImage: “This is crazy. I know there are threads that touch on this already, but none of them have led me to the answer. I can’t believe that it is really this difficult!”
- Resize Image High Quality: “I have done lots of searching for a way to resize images via the iPhone SDK and I have come across a few methods which work, but the resulting image does not look nearly as good as if you took the full resolution image and told it to draw inside a rectangle.”
These discussions have resulted in countless code snippets that claim to resize a UIImage, but many of them contain bugs, or they simply leave out functionality such as EXIF orientation support, an absolute necessity when dealing with photographs taken by the iPhone’s camera. For instance, a particularly popular code snippet for UIImage resizing incorrectly processes alpha information, resulting in a pink tint for certain image files.
Image resized correctly
Image resized with buggy code
A Better Way to Resize Images
The following sections describe yet another collection of source code for resizing UIImage objects. Functionally, it is similar to code samples that can be found elsewhere on the Internet in discussion forums and blogs, but it consolidates their features into a self-contained, reusable package and offers several notable improvements:
- Additional methods for cropping images, generating thumbnails, and more.
- Implemented as Objective-C categories to facilitate reuse. With categories, you can simply plop the code into your project, import a header file, and all of your UIImage objects will automatically have access to the new methods.
- Bugs that commonly plague other code of this type have been found and fixed. The categories have been vetted in a large, real-world iPhone app, and they contain no known bugs.
- The code has been simplified as much as possible and is more thoroughly documented.
The source code to the categories can be downloaded from the links below or as a single archive. If you are an experienced iPhone programmer, you can probably grab the files and start using them right away. Continue reading for more detail on how to apply them, as well as a run-down of the problems that prompted their creation.
- UIImage+Resize.h, UIImage+Resize.m
- Extends the UIImage class to support resizing (optionally preserving the original aspect ratio), cropping, and generating thumbnails.
- UIImage+RoundedCorner.h, UIImage+RoundedCorner.m
- Extends the UIImage class to support adding rounded corners to an image.
- UIImage+Alpha.h, UIImage+Alpha.m
- Extends the UIImage class with helper methods for working with alpha layers (transparencies).
UIImage+Alpha
The Alpha category is perhaps not as directly useful as the others, though it provides some necessary functionality that they build upon. Its methods include:
- (BOOL)hasAlpha;
- Tells whether the image has an alpha layer.
- (UIImage *)imageWithAlpha;
- Returns a copy of the image, adding an alpha channel if it doesn’t already have one. An alpha channel is required when adding transparent regions (e.g., rounded corners) to an image. It may also be necessary when loading certain kinds of image files that are not directly supported by Quartz 2D. For example, if you load a JPEG using imageNamed:, the resulting UIImage will have 32 bits per pixel with the first 8 bits unused (kCGImageAlphaNoneSkipFirst). But if you take the same image, save it in BMP format, and load it exactly the same way, the UIImage will have 24 bits per pixel (kCGImageAlphaNone), which is unsupported in Quartz 2D. Trying to render it to a graphics context will cause run-time errors. The obvious way around this problem is to make sure you only load image files that produce a Quartz-compatible pixel format. (A complete list is available in the Supported Pixel Formats section of the Quartz 2D Programming Guide.) If for some reason this is not possible, adding an alpha channel to the UIImage at runtime may also work.
- (UIImage *)transparentBorderImage:(NSUInteger)borderSize;
- Returns a copy of the image with a transparent border of the given size added around its edges. This solves a special problem that occurs when rotating a UIImageView using Core Animation: its borders look incredibly ugly. There’s simply no antialiasing around the view’s edges. Luckily, adding a one-pixel transparent border around the image fixes the problem. The extra border moves the visible edges of the image to the inside, and because Core Animation interpolates all inner pixels during rotation, the image’s borders will magically become antialiased. This trick also works for rotating a UIButton that has a custom image. The following before-and-after video shows the technique in action. (The top square is the original image; the bottom square has a one-pixel transparent border.)
[Video: before-and-after comparison of a rotating image with and without a one-pixel transparent border — http://vocaro.com/trevor/blog/wp-content/uploads/2009/10/Jaggies-with-Core-Animation-rotation.mp4]
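As a usage sketch of the Alpha category (the file name is hypothetical; the category header is assumed to be in your project), guarding against a Quartz-incompatible image might look like this:

```objc
#import "UIImage+Alpha.h"

// BMP files can load as kCGImageAlphaNone, which Quartz 2D contexts reject,
// so ensure an alpha channel is present before drawing into a bitmap context.
UIImage *loaded = [UIImage imageNamed:@"logo.bmp"]; // hypothetical file
UIImage *safeImage = [loaded hasAlpha] ? loaded : [loaded imageWithAlpha];
```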
UIImage+RoundedCorner
With the release of iPhone OS 3.0, a new Core Animation feature called cornerRadius became available. When applied to a layer, it makes the corners soft and round, just the thing for achieving a Web 2.0 or Mac OS X look-and-feel. For example, if you have a UIButton with a custom image like this:
And add a couple lines of code:
button.layer.cornerRadius = 30;
button.layer.masksToBounds = YES;
You get this:
The fun stops there. The cornerRadius setting only affects the run-time appearance of the layer. As soon as you save the image or send it over the network, the rounded corners go away. Also, if you animate the layer, perhaps by making it rotate, the cornerRadius property mysteriously reverts to zero, giving the image sharp corners again. This is a confirmed bug (#7235852) in iPhone OS 3.0 and 3.1.
To fix this problem, the RoundedCorner category can apply rounded corners to a UIImage permanently. It modifies the image data itself, adding an alpha layer if necessary. Not only does this work around the Core Animation bug, it also preserves the rounded corner effect when exporting the UIImage to a file or network stream, assuming that the output format supports transparency.
The category exposes a single method:
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize;
- Creates a copy of the image, adding rounded corners of the specified radius. If borderSize is non-zero, a transparent border of the given size will also be added. (The primary purpose of this parameter is to work around the aforementioned aliasing problem that occurs when rotating an image view.) The implementation is based on code by Björn Sållarp.
UIImage+Resize
Resizing a UIImage is more complicated than it may seem. First, there’s simply the matter of learning Quartz 2D—a somewhat complex, low-level API. A mistake in a single parameter can suddenly affect thousands of pixels, yielding unexpected results like the pink tint problem shown previously.
Another issue to consider is the quality of the resulting image. By default, Quartz 2D applies a fast but not-so-high-quality interpolation algorithm when scaling images up or down. The effect is especially noticeable when reducing an image to a very small size, perhaps for an icon or thumbnail representation. The aliasing caused by the algorithm transforms smooth lines into jagged edges. Faces become a pixelated mess.
To illustrate, the following image is the result of squeezing a 1024×516-pixel JPEG (courtesy of PD Photo) into a 320×200-pixel UIImageView with automatic resizing enabled:
Note the serrated edges along the wings. To counteract the unsightliness, Quartz 2D can be configured for a different scaling algorithm by calling CGContextSetInterpolationQuality. Here is the same image, pre-scaled using the kCGInterpolationHigh option, and displayed in the same UIImageView:
The jaggies are now gone, replaced with smoother, cleaner lines.
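A minimal sketch of high-quality scaling with Quartz 2D follows. This is not the category’s actual implementation; it ignores the alpha and orientation concerns discussed elsewhere in this article and simply shows where the interpolation setting fits in:

```objc
#import <UIKit/UIKit.h>

// Scale an image with high-quality interpolation. Assumes an upright,
// Quartz-compatible source image.
UIImage *ScaledImage(UIImage *image, CGSize newSize) {
    UIGraphicsBeginImageContext(newSize);
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
    [image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
    UIImage *result = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return result;
}
```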
Yet another obstacle, one of particular importance to iPhone developers, is image orientation. When a user takes a snapshot with the iPhone’s camera, the image is not upright but is in fact rotated 90 degrees counterclockwise, because the iPhone’s camera is positioned in a way that makes up (from the lens’s perspective) point to the left-hand side of the camera. The iPhone’s camera software knows this and therefore adds a special flag to the image data that indicates how the pixels should be rotated to produce the correct orientation. The software employs the same tactic when the user takes a picture in landscape mode (i.e., holding the phone sideways): it can rotate the image without having to apply a transformation across millions of pixels; instead, it simply changes the orientation flag. Components such as UIImageView automatically read this flag—stored in the imageOrientation property of UIImage—and apply the proper rotation at run-time when displaying the image.
Unfortunately, as soon as you dip into the low-level Quartz 2D API, which has no knowledge of the high-level UIImage class, the orientation information is lost. An image resize algorithm written using this API will need to be provided with the orientation and perform the rotation explicitly.
The Resize category solves each of these problems while incorporating additional handy features. Its methods include:
- (UIImage *)croppedImage:(CGRect)bounds;
- Returns a copy of the image that is cropped to the given bounds. The bounds will be adjusted using CGRectIntegral, meaning that any fractional values will be converted to integers.
- (UIImage *)thumbnailImage:(NSInteger)thumbnailSize transparentBorder:(NSUInteger)borderSize cornerRadius:(NSUInteger)cornerRadius interpolationQuality:(CGInterpolationQuality)quality;
- Returns a copy of the image reduced to the given thumbnail dimensions. If the image has a non-square aspect ratio, the longer portion will be cropped. If borderSize is non-zero, a transparent border of the given size will also be added. (The primary purpose of this parameter is to work around the aforementioned aliasing problem that occurs when rotating an image view.) Finally, the quality parameter determines the amount of antialiasing to perform when scaling the image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality;
- Returns a resized copy of the image. The quality parameter determines the amount of antialiasing to perform when scaling the image. Note that the image will be scaled disproportionately if necessary to fit the specified bounds; in other words, the aspect ratio is not preserved. This method, as well as all other methods described here that perform resizing, takes into account the orientation of the UIImage and transforms the pixels accordingly. The resulting image’s orientation will be up (UIImageOrientationUp), regardless of the current orientation value. The code to perform this transformation is based in part on the following sources:
- (UIImage *)resizedImageWithContentMode:(UIViewContentMode)contentMode bounds:(CGSize)bounds interpolationQuality:(CGInterpolationQuality)quality;
- UIImageView offers a remarkably helpful ability: it can resize displayed images while preserving their aspect ratio. The manner of preservation depends on a setting known as the content mode. For example, if a large JPEG (courtesy of PD Photo) is displayed in a small view with the content mode set to Center (UIViewContentModeCenter), only a portion of the image is visible:

To include the entire image, the view’s content can be scaled to fit within the bounds (UIViewContentModeScaleToFill). Note that Scale To Fill does not preserve the aspect ratio, resulting in a squashed image:

To scale the image without changing the aspect ratio, one option is to shrink the content until it fits entirely within the bounds (UIViewContentModeScaleAspectFit). Although this option shows the full image, it has the side-effect of not filling the entire view:

(Note that any area not covered by the image in Aspect Fit mode is actually transparent. It’s colored gray here to show the view boundary.)

Another aspect-preserving option is to shrink the content just enough to fit the smaller dimension within the view (UIViewContentModeScaleAspectFill). The larger dimension (in this case, the length) will be cropped:
The correct choice of content mode depends, of course, on the desired appearance and the nature of the source image.
Because these modes are so useful, equivalent functionality has been rolled into the Resize category. Scale To Fill is the default behavior of resizedImage:interpolationQuality:, while resizedImageWithContentMode: supports both Aspect Fit and Aspect Fill. (Other content modes, such as Left and Bottom Right, were left unimplemented because they are rarely used.)
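Putting the Resize category to work might look like the following sketch (the file name and target sizes are illustrative, not from the original post):

```objc
#import "UIImage+Resize.h"

UIImage *photo = [UIImage imageNamed:@"vacation.jpg"]; // hypothetical file

// Proportional resize that fits entirely within 320x200 (Aspect Fit).
UIImage *fitted =
    [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit
                                bounds:CGSizeMake(320, 200)
                  interpolationQuality:kCGInterpolationHigh];

// Square thumbnail with a 1-pixel transparent border and rounded corners;
// the longer dimension of a non-square photo is cropped automatically.
UIImage *thumb = [photo thumbnailImage:57
                     transparentBorder:1
                          cornerRadius:10
                  interpolationQuality:kCGInterpolationMedium];
```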
License
All code presented here is free for both personal and commercial use, with or without modification. No warranty is expressed or implied.
Actually, I might revise my opinion. I don’t think the issue has to do with 64-bit, but as Mikołaj says, it seems to be related to an invalid context.
When I set a breakpoint on CGPostError, I can at least inspect what’s going on, but as I’m sure you know, these kinds of CoreGraphics issues are very difficult to debug, and so far I haven’t gotten anywhere. All arguments and values appear to be valid.
As I found, this issue is related to an unsupported parameter combination; see the table “Pixel formats supported for bitmap graphics contexts”. The kCGImageAlphaLast value is not supported at all.
For me, though, this code works on devices as expected; in the simulator, no image is drawn.
Here is my Swift port: https://github.com/giacgbj/UIImage-Swift-Extensions
Feel free to test it and report issues.
Any ideas on porting it to Swift 2.0? Have you seen that around?
Check my Swift 2.0 converted version of the image extension:
http://codeingwithios.blogspot.in/2015/10/uiimage-category-swift-version.html
I am not sure how to use this code to get an image for my CALayer. The following does NOT work:
UIImage *im = [UIImage imageNamed:@"fileName"];
UIImage *im2 = [im resizedImage:CGSizeMake(dim, dim) interpolationQuality:kCGInterpolationMedium];
lay.contents = (id)im2.CGImage;
It works better now! I opened the PNG image file in Preview and saved it again without the alpha. In addition, I had to multiply dim by the screen scale factor (i.e., [[UIScreen mainScreen] scale]).
First off, a big THANK YOU: for the code (nice!), but mainly for your words regarding copy-and-paste programming, which I have advocated against for many years now.
I just stepped into tech-leading a team and inherited a large codebase full of stack-overflow-copy-paste junk, some of which works really badly, full of errors and inefficiencies, and I’m feeling the full wrath of this coding “style”. In particular, we need lots of image scaling (not for display, but for speedy machine-learning model processing), and I found that our code was squashing images (losing aspect ratio) when scaling, which of course dramatically decreased the quality of ML model results.
So, thanks again. There are a few things I’d like to note, though:
1. Many of the links in your nice blog entry are dead now. Can you please re-locate the original pages and fix those links?
2. Would you care to elaborate a little more on scaling in contexts other than display? (We need scaling to in-memory pixel buffers for further visual processing and analysis, so we care about speed, and the meaning of “quality” isn’t the same for us. In some places, the “jaggies” are better than the smoothed rescaled images.)
Thanks Motti, glad it was useful to you. Unfortunately I haven’t looked at this code in years and no longer support it.
Hmm… a simple-minded question.
If I have an existing UIImage (read from the user’s Photos library) and I need to scale it down (not for display, but for faster processing), I want to create a new UIImage that is as close as possible to the original (same attributes, orientation, etc.) but scaled down to, let’s say, 40% of the original size. Of course I want its aspect ratio preserved. Which of your APIs is the best to use here?
-[UIImage resizedImage:interpolationQuality:] should do that for you.