When deadlines loom, even skilled and experienced programmers can get a little sloppy. The pressure to ship may cause them to cut corners and look for a quick and easy solution, even if that solution is sure to cause trouble later on. Eventually, their coding style devolves into copy and paste programming, a lamentable tactic that involves cherry-picking snippets of code from a past project and putting them to use in the current one. Of course, the proper solution is to factor out the code into some kind of reusable library, but due to time constraints, it’s simply duplicated wherever it’s needed. Any bugs in the original code have now spread to a dozen different places in a dozen different projects. It’s an algorithm for chaos.
Yet in the world of iPhone applications, copy and paste programming seems to be disturbingly common. The fact that so many iPhone apps are short-term, one-off projects doesn’t help, but the situation has been aggravated even more by Apple’s security restrictions. In particular, dynamic linking to any shared library that doesn’t ship with the OS is strictly forbidden. One could argue that this rule is a necessary side-effect of the iPhone’s sandboxing security model, but even workarounds such as consolidating code into a static library are extraordinarily difficult. Another contributing factor is that the iPhone API is still relatively immature, and developers often need custom code to fill in its gaps.
This situation has transformed more than a few iPhone programmers into copy and paste programmers. When they inevitably encounter some limitation with the iPhone API, the typical response is:
- Search online for a solution
- Find a snippet of code somewhere that gets the job done (usually at Stack Overflow or iPhone Dev SDK)
- Copy and paste the snippet into their project
- Move on to the next problem
Now imagine what happens when a thousand iPhone developers find the same snippet. Suddenly the problems of copy and paste programming have gone global. Offline, a bug in a single snippet of code may infect a dozen projects; online, it can spread to thousands.
As a reluctant copy and paste iPhone programmer myself, I’ve witnessed this scenario first-hand. I recently encountered a limitation with a certain iPhone class—UIImage—and I found in a discussion forum what seemed to be a popular, well-regarded solution. The code snippet was the first hit in a Google search, and many readers had replied with thanks to its author. However, a bit of testing showed that it worked for most images but completely failed for others. By the time I stumbled upon it, the buggy code had probably spread to countless programs already.
In the process of finding the bug and posting the fix, I ended up writing a substantial amount of additional code to address various other annoyances related to UIImage. The complete listing is available for download below. Though it won’t solve the copy and paste problem, it should be a welcome remedy for other iPhone developers who have run into similar obstacles.
Background
Programming for the iPhone, a highly graphical device, necessarily involves a substantial amount of image manipulation. Its SDK therefore provides an abstraction called UIImage that handles much of the effort in importing and drawing images. For example, imagine you want to load a JPEG file, scale it down to icon size, and give it rounded corners. These tasks may require tens or even hundreds of lines of code on other platforms, but on the iPhone, it’s only a matter of instantiating a UIImage, passing it to a UIImageView of the appropriate size, and setting the cornerRadius property.
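To make that concrete, here is a minimal sketch of how such code looks. The file name, sizes, and corner radius are arbitrary placeholders, and QuartzCore must be imported to reach the layer property.

#import <QuartzCore/QuartzCore.h>

UIImage *photo = [UIImage imageNamed:@"photo.jpg"];        // load the JPEG from the app bundle
UIImageView *iconView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 57, 57)];
iconView.image = photo;                                    // UIImageView scales it at display time
iconView.layer.cornerRadius = 9;                           // rounded corners, applied by Core Animation
iconView.layer.masksToBounds = YES;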
Despite its ease of use, or perhaps because of it, UIImage suffers from some serious limitations. Key among these is its lack of support for resizing the image, a feature that is normally handled dynamically by its companion, the UIImageView component. However, should an iPhone application need to reduce the size of an image for storage or for exchange with an external entity (such as a web server), the UIImage class is insufficient.
Of course, UIImage is not the only means of image manipulation on the iPhone. It ships with a rather sophisticated graphics API, known as Quartz 2D, that offers low-level control of bitmap data. Clearly, the functionality for resizing an image exists, although taking advantage of it is not straightforward and requires the developer to write non-trivial code. How best to accomplish this task has been the source of much confusion and debate, particularly in forums such as iPhone Dev SDK:
- Resizing a photo to a new UIImage
This is crazy. I know there are threads that touch on this already, but none of them have led me to the answer. I can’t believe that it is really this difficult!
- Resize Image High Quality
I have done lots of searching for a way to resize images via the iPhone SDK and I have come across a few methods which work, but the resulting image does not look nearly as good as if you took the full resolution image and told it to draw inside a rectangle.
These discussions have resulted in countless code snippets that claim to resize a UIImage, but many of them contain bugs, or they simply leave out functionality such as EXIF orientation support, an absolute necessity when dealing with photographs taken by the iPhone’s camera. For instance, a particularly popular code snippet for UIImage resizing incorrectly processes alpha information, resulting in a pink tint for certain image files.
Image resized correctly
Image resized with buggy code
A Better Way to Resize Images
The following sections describe yet another collection of source code for resizing UIImage objects. Functionally, it is similar to code samples that can be found elsewhere on the Internet in discussion forums and blogs, but it consolidates their features into a self-contained, reusable package and offers several notable improvements:
- Additional methods for cropping images, generating thumbnails, and more.
- Implemented as Objective-C categories to facilitate reuse. With categories, you can simply plop the code into your project, import a header file, and all of your UIImage objects will automatically have access to the new methods.
- Bugs that commonly plague other code of this type have been found and fixed. The categories have been vetted in a large, real-world iPhone app, and they contain no known bugs.
- The code has been simplified as much as possible and is more thoroughly documented.
The source code to the categories can be downloaded from the links below or as a single archive. If you are an experienced iPhone programmer, you can probably grab the files and start using them right away. Continue reading for more detail on how to apply them, as well as a run-down of the problems that prompted their creation.
- UIImage+Resize.h, UIImage+Resize.m
- Extends the UIImage class to support resizing (optionally preserving the original aspect ratio), cropping, and generating thumbnails.
- UIImage+RoundedCorner.h, UIImage+RoundedCorner.m
- Extends the UIImage class to support adding rounded corners to an image.
- UIImage+Alpha.h, UIImage+Alpha.m
- Extends the UIImage class with helper methods for working with alpha layers (transparencies).
UIImage+Alpha
The Alpha category is perhaps not as directly useful as the others, though it provides some necessary functionality that they build upon. Its methods include:
- (BOOL)hasAlpha;
- Tells whether the image has an alpha layer.
- (UIImage *)imageWithAlpha;
- Returns a copy of the image, adding an alpha channel if it doesn’t already have one. An alpha is required when adding transparent regions (e.g., rounded corners) to an image. It may also be necessary when loading certain kinds of image files that are not directly supported by Quartz 2D. For example, if you load a JPEG using imageNamed:, the resulting UIImage will have 32 bits per pixel with the first 8 bits unused (kCGImageAlphaNoneSkipFirst). But if you take the same image and save it in BMP format, and load it exactly the same way, the UIImage will have 24 bits per pixel (kCGImageAlphaNone), which is unsupported in Quartz 2D. Trying to render it to a graphics context will cause run-time errors. The obvious way around this problem is to make sure you only load image files that produce a Quartz-compatible pixel format. (A complete list is available in the Supported Pixel Formats section of the Quartz 2D Programming Guide.) If for some reason this is not possible, adding an alpha channel to the UIImage at runtime may also work.
- (UIImage *)transparentBorderImage:(NSUInteger)borderSize;
- Returns a copy of the image with a transparent border of the given size added around its edges. This solves a special problem that occurs when rotating a UIImageView using Core Animation: its borders look incredibly ugly. There’s simply no antialiasing around the view’s edges. Luckily, adding a one-pixel transparent border around the image fixes the problem. The extra border moves the visible edges of the image to the inside, and because Core Animation interpolates all inner pixels during rotation, the image’s borders will magically become antialiased. This trick also works for rotating a UIButton that has a custom image. The following before-and-after video shows the technique in action. (The top square is the original image; the bottom square has a one-pixel transparent border.)
[Video: Jaggies with Core Animation rotation — http://vocaro.com/trevor/blog/wp-content/uploads/2009/10/Jaggies-with-Core-Animation-rotation.mp4]
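A minimal usage sketch of the two methods above, assuming an image file named photo.jpg in the bundle:

UIImage *original = [UIImage imageNamed:@"photo.jpg"];                            // assumed file name
UIImage *withAlpha = [original hasAlpha] ? original : [original imageWithAlpha];  // ensure an alpha channel exists
UIImage *padded = [withAlpha transparentBorderImage:1];                           // 1-pixel border to avoid jagged edges during rotation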
UIImage+RoundedCorner
With the release of iPhone OS 3.0, a new Core Animation feature called cornerRadius became available. When applied to a layer, it makes the corners soft and round, just the thing for achieving a Web 2.0 or Mac OS X look-and-feel. For example, if you have a UIButton with a custom image like this:
And add a couple lines of code:
button.layer.cornerRadius = 30;
button.layer.masksToBounds = YES;
You get this:
The fun stops there. The cornerRadius setting only affects the run-time appearance of the layer. As soon as you save the image or send it over the network, the rounded corners go away. Also, if you animate the layer, perhaps by making it rotate, the cornerRadius property mysteriously reverts to zero, giving the image sharp corners again. This is a confirmed bug (#7235852) in iPhone OS 3.0 and 3.1.
To fix this problem, the RoundedCorner category can apply rounded corners to a UIImage permanently. It modifies the image data itself, adding an alpha layer if necessary. Not only does this work around the Core Animation bug, it also preserves the rounded corner effect when exporting the UIImage to a file or network stream, assuming that the output format supports transparency.
The category exposes a single method:
- (UIImage *)roundedCornerImage:(NSInteger)cornerSize borderSize:(NSInteger)borderSize;
- Creates a copy of the image, adding rounded corners of the specified radius. If borderSize is non-zero, a transparent border of the given size will also be added. (The primary purpose of this parameter is to work around the aforementioned aliasing problem that occurs when rotating an image view.) The implementation is based on code by Björn Sållarp.
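A typical call might look like this. This is only a sketch; the file name and radius are placeholders, and UIImagePNGRepresentation is one way to confirm that the corners survive export:

UIImage *avatar = [UIImage imageNamed:@"avatar.png"];            // assumed file name
UIImage *rounded = [avatar roundedCornerImage:30 borderSize:0];  // corners are baked into the pixel data
NSData *pngData = UIImagePNGRepresentation(rounded);             // PNG output keeps the transparency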
UIImage+Resize
Resizing a UIImage is more complicated than it may seem. First, there’s simply the matter of learning Quartz 2D—a somewhat complex, low-level API. A mistake in a single parameter can suddenly affect thousands of pixels, yielding unexpected results like the pink tint problem shown previously.
Another issue to consider is the quality of the resulting image. By default, Quartz 2D applies a fast but not-so-high-quality interpolation algorithm when scaling images up or down. The effect is especially noticeable when reducing an image to a very small size, perhaps for an icon or thumbnail representation. The aliasing caused by the algorithm transforms smooth lines into jagged edges. Faces become a pixelated mess.
To illustrate, the following image is the result of squeezing a 1024×516-pixel JPEG (courtesy of PD Photo) into a 320×200-pixel UIImageView with automatic resizing enabled:
Note the serrated edges along the wings. To counteract the unsightliness, Quartz 2D can be configured for a different scaling algorithm by calling CGContextSetInterpolationQuality. Here is the same image, pre-scaled using the kCGInterpolationHigh option, and displayed in the same UIImageView:
The jaggies are now gone, replaced with smoother, cleaner lines.
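For the curious, the heart of such pre-scaling is only a handful of Quartz 2D calls. The following is a rough sketch, not the category’s exact implementation: imageRef, newWidth, and newHeight are assumed to be supplied by the caller, and the pixel format is hard-coded for simplicity.

CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(NULL, newWidth, newHeight, 8, 0,
                                             colorSpace, kCGImageAlphaPremultipliedLast);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);   // the setting discussed above
CGContextDrawImage(context, CGRectMake(0, 0, newWidth, newHeight), imageRef);
CGImageRef scaledRef = CGBitmapContextCreateImage(context);
UIImage *scaled = [UIImage imageWithCGImage:scaledRef];
CGImageRelease(scaledRef);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);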
Yet another obstacle, one of particular importance to iPhone developers, is image orientation. When a user takes a snapshot with the iPhone’s camera, the image is not upright but is in fact rotated 90 degrees counterclockwise. The reason is that the iPhone’s camera is positioned in a way that makes up (from the lens’s perspective) point to the left-hand side of the camera. The iPhone’s camera software knows this and therefore adds a special flag to the image data that indicates how the pixels should be rotated to produce the correct orientation. The software employs the same tactic when the user takes a picture in landscape mode (i.e., holding the phone sideways). It can rotate the image without having to apply a transformation across millions of pixels. Instead, it simply changes the orientation flag. Components such as UIImageView automatically read this flag—stored in the imageOrientation property of UIImage—and apply the proper rotation at run-time when displaying the image.
Unfortunately, as soon as you dip into the low-level Quartz 2D API, which has no knowledge of the high-level UIImage class, the orientation information is lost. An image resize algorithm written using this API will need to be provided with the orientation and perform the rotation explicitly.
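To give a flavor of what that entails, here is a fragment showing how an orientation flag can be mapped to a CGAffineTransform before drawing. This is a sketch only: the full implementation must also handle the mirrored variants and swap width and height for the 90-degree cases, and the names image and newSize are assumed to be defined by the caller.

CGAffineTransform transform = CGAffineTransformIdentity;
switch (image.imageOrientation) {
    case UIImageOrientationDown:                       // photo taken upside-down
        transform = CGAffineTransformTranslate(transform, newSize.width, newSize.height);
        transform = CGAffineTransformRotate(transform, M_PI);
        break;
    case UIImageOrientationRight:                      // typical portrait snapshot from the camera
        transform = CGAffineTransformTranslate(transform, 0, newSize.height);
        transform = CGAffineTransformRotate(transform, -M_PI_2);
        break;
    default:                                           // UIImageOrientationUp needs no correction
        break;
}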
The Resize category solves each of these problems while incorporating additional handy features. Its methods include:
- (UIImage *)croppedImage:(CGRect)bounds;
- Returns a copy of the image that is cropped to the given bounds. The bounds will be adjusted using CGRectIntegral, meaning that any fractional values will be converted to integers.
- (UIImage *)thumbnailImage:(NSInteger)thumbnailSize transparentBorder:(NSUInteger)borderSize cornerRadius:(NSUInteger)cornerRadius interpolationQuality:(CGInterpolationQuality)quality;
- Returns a copy of the image reduced to the given thumbnail dimensions. If the image has a non-square aspect ratio, the longer portion will be cropped. If borderSize is non-zero, a transparent border of the given size will also be added. (The primary purpose of this parameter is to work around the aforementioned aliasing problem that occurs when rotating an image view.) Finally, the quality parameter determines the amount of antialiasing to perform when scaling the image.
- (UIImage *)resizedImage:(CGSize)newSize interpolationQuality:(CGInterpolationQuality)quality;
- Returns a resized copy of the image. The quality parameter determines the amount of antialiasing to perform when scaling the image. Note that the image will be scaled disproportionately if necessary to fit the specified bounds. In other words, the aspect ratio is not preserved. This method, as well as all other methods described here that perform resizing, takes into account the orientation of the UIImage and transforms the pixels accordingly. The resulting image’s orientation will be up (UIImageOrientationUp), regardless of the current orientation value. The code to perform this transformation is based in part on the following sources:
- (UIImage *)resizedImageWithContentMode:(UIViewContentMode)contentMode bounds:(CGSize)bounds interpolationQuality:(CGInterpolationQuality)quality;
- UIImageView offers a remarkably helpful ability: it can resize displayed images while preserving their aspect ratio. The manner of preservation depends on a setting known as the content mode. For example, if a large JPEG (courtesy of PD Photo) is displayed in a small view with the content mode set to Center (UIViewContentModeCenter), only a portion of the image is visible:
To include the entire image, the view’s content can be scaled to fit within the bounds (UIViewContentModeScaleToFill). Note that Scale To Fill does not preserve the aspect ratio, resulting in a squashed image:
To scale the image without changing the aspect ratio, one option is to shrink the content until it fits entirely within the bounds (UIViewContentModeScaleAspectFit). Although this option shows the full image, it has the side-effect of not filling the entire view:
(Note that any area not covered by the image in Aspect Fit mode is actually transparent. It’s colored gray here to show the view boundary.)
Another aspect-preserving option is to shrink the content just enough to fit the smaller dimension within the view (UIViewContentModeScaleAspectFill). The larger dimension (in this case, the length) will be cropped:
The correct choice of content mode depends, of course, on the desired appearance and the nature of the source image.
Because these modes are so useful, equivalent functionality has been rolled into the Resize category. Scale To Fill is the default behavior of resizedImage:interpolationQuality:, while resizedImageWithContentMode: supports both Aspect Fit and Aspect Fill. (Other content modes, such as Left and Bottom Right, were left unimplemented because they are rarely used.)
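Putting the category to work is then a line or two per image. For example (a sketch; the file name and dimensions are arbitrary):

UIImage *photo = [UIImage imageNamed:@"photo.jpg"];   // assumed file name

// 75×75 thumbnail with 10-pixel rounded corners and high-quality scaling
UIImage *thumbnail = [photo thumbnailImage:75
                         transparentBorder:0
                              cornerRadius:10
                      interpolationQuality:kCGInterpolationHigh];

// Aspect Fit resize into a 320×480 bounding box
UIImage *fitted = [photo resizedImageWithContentMode:UIViewContentModeScaleAspectFit
                                               bounds:CGSizeMake(320, 480)
                                 interpolationQuality:kCGInterpolationHigh];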
License
All code presented here is free for both personal and commercial use, with or without modification. No warranty is expressed or implied.
Well, this is an interesting read. Thanks for sharing your code…
really great stuff, thanks for the contribution…
Thanks for this info and code. However, as far as I can tell, croppedImage: still suffers from the rotation issue.
True; I didn’t bother with handling the imageRotation setting in croppedImage. I changed the method to include a source code comment explaining this.
If anyone would like to contribute an improved croppedImage function that fixes this oversight, I’d be happy to include it in the distribution.
Using some code from Robert Clark’s niftybean.com blog, and a minor generalization to the private resizeImage:transform:… helper method, I’ve fixed this oversight. It’s unfortunate that the original UIImage’s imageRotation setting can’t be kept in the new UIImage, since that would save a rotation step. Code is in my blog post.
Pierre H., room1337.com
FYI: Your link to the UIImage+Alpha.m file is wrong.
Fixed. Thanks for reporting that!
These were a lifesaver, thanks man!
Thank you! This is the kind of stuff I’ve been looking for. I especially love the built-in transparent pixel border to avoid jaggies.
Your code does not appear to support images in GIF format. More specifically, here is the error I get when I try GIF. It does seem to work fine with JPEG AND PNG however.
It is not my code that doesn’t support GIF but rather Quartz 2D, which only supports specific pixel formats.
To work around the problem, simply convert the 8-bit GIF to a 24-bit PNG. The PNG will also need an alpha layer, or you can add one at run-time using imageWithAlpha.
Really, really appreciate this post! Fantastic info.
I think I might be missing some key concept, though. The following doesn’t seem to work.
UIImage *thumbnail = [image thumbnailImage:44 transparentBorder:0 cornerRadius:0 interpolationQuality:kCGInterpolationLow];
Is there code I need to put around this single line? UIGraphicsBeginImageContext() or something? In the debugger, thumbnail._imageRef is always nil after the call.
Any suggestions?
That code should work fine without any additions. In fact, I just tested it and it worked. If you’re getting a nil return value, make sure that:
1. The caller is not nil (of course).
2. You are not getting any Core Graphics runtime errors in the console, like the ones Mark Lenard encountered above.
Great read! I was really looking for something like this. I am having issues in my app with storing/retrieving images from a SQLite DB, and it takes too much memory/time to do these jobs. I hope this will help me resize the image to a smaller size (I don’t need a full-screen-size image). And the cropping will be a nice addition 🙂
Thanks!
Thank you so much for this post; the resize functionality is just what I have been searching for! It works flawlessly as far as I’ve explored. Really nice work on all accounts, nice one.
Further thanks for providing such a detailed abstract of the problem and your insight. Much appreciated.
Regards.
Amazing article and UIImage categories. I’ve been through so many versions of the “Scale & Resize” methods from various places on the web and they all had problems or didn’t handle certain situations well.
My particular problem with most of them was that they were not thread-safe, as they used UI functions instead of CG ones.
This is by far the best solution to the lack of UIImage functionality in the iPhone SDK. Massive thanks to Trevor for this.
If I resize an image and save it to the disk, then open that image and do another resize I get the error:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; …
I was able to get rid of my error by changing the last argument of CGBitmapContextCreate(… ) to kCGImageAlphaPremultipliedLast instead of CGImageGetBitmapInfo(imageRef) in UIImage+Resize. I’m not sure if the other files need the fix.
A caveat to the above: the original settings worked if I saved the file using UIImageJPEGRepresentation, but I was having difficulties with UIImagePNGRepresentation and then trying to resize. Not sure what’s up.
You seem to be running into the same problem that Mark Lenard did (see the comments above). You’re saving the image in a pixel format that is unsupported by Quartz 2D. Hard-coding the CGImageAlphaInfo parameter to be PremultipliedLast happens to work around the problem in your situation but of course this is not a universal fix. You need to make sure that the UIImage you pass to the resize methods has a supported pixel format.
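One cheap safeguard is to normalize the image first, reusing the Alpha category described above. This is just a sketch; the path variable is assumed:

UIImage *input = [UIImage imageWithContentsOfFile:path];   // 'path' assumed to point at the saved file
if (![input hasAlpha]) {
    input = [input imageWithAlpha];                        // convert to a Quartz-supported 32-bit format
}
UIImage *smaller = [input resizedImage:CGSizeMake(320, 240)
                  interpolationQuality:kCGInterpolationHigh];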
Hi – great code – thanks!
I’m using UIViewContentModeScaleAspectFill to fill the size I need and crop the rest off the image. When I try to crop a landscape image to a portrait size (320 x480) I notice that the cropped section is from the left of the image. Is there any way to adapt this code to specify that the crop should be made from the centre of the image instead?
If you don’t want to crop from the left side, simply specify a positive number for the origin.x component of the bounds parameter. For example, a bounds of CGRectMake(150, 230, 20, 20) will crop around the center of your image.
I think I understand what you’re saying, but the bounds takes a CGSize. Unless you mean editing one of the methods in UIImage+Resize? If so, which? 🙂
I’m using this on both my landscape and portraits right now:
[image resizedImageWithContentMode:UIViewContentModeScaleAspectFill bounds:CGSizeMake(320,426) interpolationQuality:kCGInterpolationDefault];
You were talking about cropping. The croppedImage method takes a CGRect value as the bounds parameter.
Trevor,
Thanks a bunch for the explanations and the code. I didn’t really understand what was happening with UIImages before, and I was having a lot of problems with resizing, saving, and reloading. Things are working now, thanks to your help.
I would have liked to make a donation for the knowledge and the extensions code.
I added a PayPal donation button at the end of the post. Thanks!
Hi Trevor,
first of all congrats on a very well written article, that’s both informative and provides great code.
You really got me thinking about the copy/paste programming. I think the reason it’s so widespread among iPhone programmers is that the iPhone was marketed as a platform where anybody with no programming experience could start fresh and make a bunch of millions in a few weeks. Lots of people actually believed that and started copy/pasting code to put together an app and get it onto the App Store as fast as possible.
Your code is great, thanks for sharing. However, the Quartz-supported formats really seem to be an issue. I get errors with 24-bit alpha PNG files; it seems hard to get it right 🙂
For the rest of readers here is a direct link to the supported formats:
Supported Pixel Formats
best, Marin
I just donated, thank you for a great article! You saved me a ton of time. I did get confused on one thing which I think marc commented on above as well. After reading the article, I thought the UIViewContentModeScaleAspectFill also cropped the image to the size requested. In case anyone else gets confused on that, here is code that does both aspect fill and center crop:
UIViewContentModeScaleAspectFill, by definition, does not crop. It resizes the image so that it fits within the given bounds, while preserving the aspect ratio. For example, if you have an image that is 300×300 pixels, and you resize it to 600×500 pixels using UIViewContentModeScaleAspectFill, you will get an image that is 500×500 pixels. There is no need to crop it, and in fact, the cropping code you show is incorrect because it doesn’t alter the size of the scaled image.
Thanks for the extremely helpful article. I am attempting to use the thumbnail method and am finding that the right side of the resized image is not rendering correctly (it is weirdly pixelated).
I am making the following call:
[image thumbnailImage:50 transparentBorder:0 cornerRadius:0 interpolationQuality:kCGInterpolationHigh];
Any suggestions as to why this happening would be appreciated.
I’m not able to reproduce this problem. Are you getting the same pixelated results if you use the first image of this post as the source image?
The crop function seems to work for me but the resize does not. I only get a white image after the resize. There seems to be an issue if you first crop the image and then try to scale the image. It gives context errors on the colorspace. After trying to fix that it then generates a scaled image with a grayscale overlay indicating an alpha channel issue. Any ideas?
Tried it with both JPG and PNGs
Using the first image in this post, I did a crop, then resized the cropped image (using the same method and parameters as you), but I found no issues. Can you reproduce your results with that image? If not, then perhaps there’s something wrong with your source image.
Trevor,
Thanks for the excellent article and sample code. It saved us some time this week.
One issue has arisen, though. It seems that certain pngs don’t show up in the simulator after running through the resizer. They show up just fine without the resizer (just letting UIImageView resize them in runtime). And, surprisingly, they show up just fine on an iPhone using the resizer.
Have you run into this situation before? Is it common for some images to not display correctly (or display at all) in the simulator but work fine on an iPhone when running your code?
Thanks!
Jeremy
rade | eccles
Sorry, I’ve never noticed this problem.
I’m having the same issue. Curiously, it’s with a PNG I’m using as the application’s Icon.png.
Did you ever find a way to fix this?
What version of Xcode are you using? Also, can you post a link to the image that’s giving you problems?
Yes, same issue here. A 24-bit PNG with alpha appears blank when resized. Xcode 3.2.3 here.
For those struggling to get this working, it seems that it requires doing imageWithAlpha before doing the transparent border or the resizing on a PNG. Once I did that, I was able to get an image that would rotate smoothly and without jagged edges.
UIImage *img = [[[UIImage imageNamed:@"photo.png"] imageWithAlpha] transparentBorderImage:1];
Works like butter,
Thank you Trevor.
Yes, in the discussion of imageWithAlpha, I mentioned that it is often useful with JPEGs, but it may be necessary with PNGs as well, depending on their format.
Hi all – I will preface this by saying I am new to iPhone development and am probably doing something wrong.
My question – is there any reason that I can’t use this resize code on an image prior to placing the image into the imageView of a UITableViewCell?
Basically, when I am creating a cell, this works:
[cell.imageView setImage:icon];
whereas this does not:
UIImage *resizedIcon = [icon
resizedImageWithContentMode:UIViewContentModeScaleAspectFit
bounds:CGSizeMake(30, 30)
interpolationQuality:kCGInterpolationHigh];
[cell.imageView setImage:resizedIcon];
… using the exact same PNG image previously loaded from a file using imageFromContentsOfFile.
When I place the resized image in the cell, the cell renders as if there is no image at all.
Thoughts?
I can’t think of any reason why that wouldn’t work, although the fact that you’re using imageFromContentsOfFile, and not imageNamed, and are also new to iPhone development, leads me to think that this may be a memory management problem. Are you sure you’re not releasing a pointer incorrectly?
To simplify troubleshooting, you might try putting your resized icon into a plain old UIImageView rather than a cell, just to see if that works.
Thanks much for replying and being patient with me. My issue ended up being related to the UITableViewCell autoresizing at some unknown point. I subclassed UITableViewCell and implemented layoutSubviews and made my changes there – everything works now.
This code looks great, it is exactly what I was looking for.! Thank you for making it available. I am going to download it and test it out tonight. If it works, I will donate. I am also subscribing to your RSS feed.
Thanks again!
Any little help on how to “install” the classes?? :$
Unfortunately, the mechanisms for shared code are quite limited on the iPhone. Dynamic libraries are prohibited, and although there are ways of doing static libraries, they are basically hacks around the limitations of Xcode. In my experience, the best approach is to simply create a folder in your project (e.g., “Image Utilities”) and add all the class files to it.
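After that, it’s just a matter of importing the headers wherever you need the new methods (file names as in the download list above):

#import "UIImage+Resize.h"
#import "UIImage+RoundedCorner.h"
#import "UIImage+Alpha.h"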
Very helpful! +1 Thanks much for this.
Thanks for your work, it works great – mostly.
I am running into the same problem as some previous posters: my app processes everything as if there is actually an image; however, no image is visible in the simulator or on the device.
Here is some code:
UIImage *image = [UIImage imageNamed:@"test8.png"];
self.imageView.image = image; //this works as expected
self.imageView.image = [image roundedCornerImage:16 borderSize:0]; //this does not work, the image is now blank
I have found the source of the problem, or at least mine: PNG-24 format pictures work perfectly, however, PNG-8 pictures present the problem I am having. I have tried several different settings for my PNG-8 and none of them will show anything after being processed by roundedCornerImage.
I am unsure how to fix this and it would be very nice to have this functional with PNG-8 due to the dramatic file size differences.
Thanks!
Thank you a lot for this, it works very well.
I have some problems with resize, though: it doesn’t seem to work with 16-bit-per-pixel PNGs (on a real iPhone and on the simulator), while it is stated in the Quartz 2D Guide that this pixel format is supported by CGBitmapContextCreate …
You can try with the following image
bart.png
Any idea ?
The only 16-bit color format supported by Quartz 2D is 16 bpp, 5 bpc, kCGImageAlphaNoneSkipFirst. But when your Bart image is loaded using imageNamed:, it is 48 bpp, 16 bpc, kCGImageAlphaNone. This is why it’s not working. An easy fix is to save the image to a different format, such as an 8 bpc PNG with an alpha channel (either embedded in the file or added at runtime using imageWithAlpha).
Unfortunately Quartz 2D doesn’t support 8 bpp color images, and I don’t know of any way around that (other than converting the image to PNG-24, of course).
Ok, thanks for your help!
Hi All,
I’ve corrected my problem: with some images (for instance, 16-bit PNGs), the resize code didn’t work on either the simulator or a real device.
I’ve replaced the following code in UIImage+Resize.m:
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(imageRef),
0,
CGImageGetColorSpace(imageRef),
CGImageGetBitmapInfo(imageRef));
with
CGContextRef bitmap = CreateCGBitmapContextForWidthAndHeight(newRect.size.width,
newRect.size.height,
NULL,
kDefaultCGBitmapInfoNoAlpha);
This latter function was copied/colled 😉 from here
http://iphonedevelopertips.com/graphics/how-to-resize-scale-an-image-thread-safe-take-2.html
It works perfectly with the simulator and a real device with some 8bit pngs and 16 bit pngs that didn’t work before…
This change will not work in general because it eliminates the alpha channel entirely. It also hard-codes the bits per component at 8, and it specifies a default RGB color space instead of using the CGImage’s colorspace (which can screw up color matching).
Here’s what ought to be a general solution to the issues people are having with alpha channels.
The essential problem is that Quartz 2D on the phone seems to require 32 bits per pixel for bitmap contexts. This is fine unless the image you’re copying from has no alpha channel at all, in which case the result of CGImageGetAlphaInfo() is kCGImageAlphaNone. The result is 24-bit-per-pixel data and the oh-so-frustrating “unsupported parameter combination” message. However, we can preserve the fact that these images have no alpha channel and reconcile it with the phone’s desire to deal with 32-bit contexts by converting kCGImageAlphaNone, when encountered, into kCGImageAlphaNoneSkipLast.
CGBitmapInfo bitmapInfo = CGImageGetBitmapInfo(imageRef);
if (CGImageGetAlphaInfo(imageRef) == kCGImageAlphaNone) {
    bitmapInfo &= ~kCGBitmapAlphaInfoMask;
    bitmapInfo |= kCGImageAlphaNoneSkipLast;
}
Tacking this little snippet into your resizing code (and then using the resulting bitmapInfo in your call to CGBitmapContextCreate()) should fix the issue in a general way. Images with existing alpha channels will end up in identical contexts while images with no alpha info will end up in 32-bit contexts with an ignored component.
Note that a similar problem exists for source images with non-pre-multiplied alpha channels. I’m not sure exactly where such images would come from, though, and I don’t know what the consequences of converting non-pre-multiplied images into a pre-multiplied bitmap context would be, so I’ve ignored this issue in my solution.
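To be concrete about where the snippet goes: the adjusted bitmapInfo simply replaces the raw CGImageGetBitmapInfo value in the context creation, roughly like this (variable names borrowed from the resize code quoted earlier in these comments):

CGContextRef bitmap = CGBitmapContextCreate(NULL,
                                            newRect.size.width,
                                            newRect.size.height,
                                            CGImageGetBitsPerComponent(imageRef),
                                            0,
                                            CGImageGetColorSpace(imageRef),
                                            bitmapInfo);   // adjusted value, not CGImageGetBitmapInfo(imageRef)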
Given that only certain combinations are supported (http://developer.apple.com/mac/library/qa/qa2001/qa1037.html), I found it necessary to also deal with kCGImageAlphaLast by changing it to kCGImageAlphaPremultipliedLast.
My images were saved as PNG from Photoshop and turned out to be kCGImageAlphaLast; I haven’t looked into how one can tell how Photoshop is going to encode them.
I also tossed in a brute force resize in case we’re not able to create a bitmap context–with a warning to the console so that the image can be dealt with properly.
Great! Thx for sharing.
How funny. I found the pink version of resizing and then found this right after. Awesomesauce. I am going to rant a bit and then post this link from my blog.
Trevor,
Just curious, do you have any idea who the graduate is in your example above? I do!
Marc
No idea. Who is it?
Just a little suggestion.
Rename hasAlpha to hasSupportedAlpha, and remove kCGImageAlphaFirst and kCGImageAlphaLast from the test, so that the image will be copied into a supported format even if it has an alpha channel of an unsupported type.
It seems to be working for me but I’m no expert with quartz.
Well, I thought I knew. Looks just like my daughter when she graduated. Closer examination by my kids found that the year on the hat is wrong. Still a real coincidence that the resemblance is so strong. Everyone got a laugh.
Thanks for the blog very helpful.
Trevor,
Please forgive my simple question about the use of this code. I have an application that downloads very large images; they consume too much memory to use as-is and just stuff into a view. Is it possible to use your code to create an alternate image that retains the same aspect ratio and boundaries so that the image looks unchanged, and then save the image to a file for later use?
Thanks again
Hello,
Thank you for sharing the code. I’m not sure how to use the extra methods with my own code; I want the users to be able to resize the image (without cropping it).
Any help would be much appreciated, Thank you.
Firstly, thanks for sharing this code.
I’m using this library to crop and then resize an image, then upload it to a site. The REALLY weird thing is that it starts off at 1536×2048 (623 KB), and AFTER cropping and resizing it is 1000×1000 but 965 KB.
Sorry but I am really confused. I’m trying to shrink the file to limit the amount of data sent and it just seems to be getting bigger.
Any one else had similar problems, or have any suggestions?
Thanks.
Fantastic code. A really useful method you should consider adding is resizing an image with a maximum dimension. If either the width or height of the image exceeds the specified max, it gets resized without altering the aspect ratio. If the image does not exceed the specified max, it does not get resized.
…and I’m an idiot. 🙂 I’d like to thank everybody for not making fun of me.
Using the code, I added in the ability to do automatic hires scaling for the iPhone 4. This makes the images show up nicer when on the iPhone 4.
http://gist.github.com/478626
One thing I haven’t tested yet, is making sure it works when the original is of Scale 2.0. I will test it soon. But it works great with other images.
Simply fabulous. I replaced my current thumbnail generation method with this much better one! I was in need of a simple way to save the image with rounded edges so that the table didn’t have to call the .layer method for every single cell (and create major lag) and this did the trick! Thanks!!
It would be great if these categories could be updated to handle iOS4’s scale factors with the retina display. Bumped into a problem with the UIImage +croppedImage: method where it wasn’t working properly. Not sure if this is ideal but I tweaked it to this:
- (UIImage *)croppedImage:(CGRect)bounds {
    bounds = CGRectMake(bounds.origin.x * self.scale, bounds.origin.y * self.scale, bounds.size.width * self.scale, bounds.size.height * self.scale);
    CGImageRef imageRef = CGImageCreateWithImageInRect([self CGImage], bounds);
    UIImage *croppedImage;
    if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
        croppedImage = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    } else {
        croppedImage = [UIImage imageWithCGImage:imageRef];
    }
    CGImageRelease(imageRef);
    return croppedImage;
}
Trevor,
thank you for sharing!
Thank you for this great article, now the pictures in my app look much cooler…
It Works!!
Thank you, thank you, thank you.
I’m using the portion that converts from a high pixel count to a manageable count. I do not understand why you create the new bitmap with zero bytes per row in resizedImage:transform:drawTransposed:interpolationQuality. I’m using 4*newWidth.
If I’m wrong, will you please instruct me?
Dan
I assume you’re referring to the CGBitmapContextCreate call. If you specify 4*newWidth for bytesPerRow, then you are hard-coding for 32 bits-per-pixel. Specifying a value of 0 calculates bytesPerRow automatically, so it works for any BPP value.
Thank you.
I tested with zero and it works. I’m glad you know this. I don’t see it in the CGBitmapContext class reference.
Dan
I appear to have found a bug in your excellent code.
In the UIImage+Alpha category, within the - (UIImage *)transparentBorderImage:(NSUInteger)borderSize method, you create an image object:
UIImage *image = [self imageWithAlpha];
However you do not reference it when creating the context
CGContextRef bitmap = CGBitmapContextCreate(NULL,
newRect.size.width,
newRect.size.height,
CGImageGetBitsPerComponent(self.CGImage),
0,
CGImageGetColorSpace(self.CGImage),
CGImageGetBitmapInfo(self.CGImage));
and nor when drawing
CGContextDrawImage(bitmap, imageLocation, self.CGImage);
Shouldn’t you be referencing the image object if you have created an alpha channel when it didn’t have one to begin with?
The images returned, are they autoreleased or do they need to be released?
The code follows the standard conventions for Objective-C memory management: you must release any object you own; you own any object you create; and you only create objects using methods starting with “alloc” or “new” or containing “copy”. Therefore, the images are autoreleased.