Why I Use Lightroom for Editing Wedding Images
During my first several years as a professional photographer, Apple's Aperture was my primary image editing app. Adobe hastily introduced Lightroom soon after Aperture was released, and I kept an eye on it as it was developed. Though I generally loved Aperture, several years later Adobe finally lured me over to Lightroom (with, if I remember correctly, version 3) by introducing noise reduction that was far superior to what Aperture could deliver. This was a breakthrough in my workflow: I could now batch-apply high-quality noise reduction non-destructively, so I could easily revert and adjust further if needed, as opposed to either using Aperture's inferior built-in noise reduction or applying noise reduction in a separate step with a standalone app after exporting. I've been with Lightroom ever since.
What about Capture One? I know quite a few photographers who swear by this app, and I've used it occasionally just to be somewhat familiar with it and keep tabs on its development. One attractive thing about C1 is that it's still available as a perpetual license (as opposed to Adobe's purely subscription model). Just like the race between Canon, Nikon, Sony, and Fuji, there are some areas where Capture One excels, other areas where Lightroom is superior, and this changes over time as existing features are refined and new ones added.
For me, though I always keep an eye on what Capture One is offering, Lightroom has continued to retain me as a user. There are several reasons why.
Though I was one of many photographers who was not happy about Adobe's shift to a subscription-only model for their software, I've since moderated this view quite a bit. First and foremost is my reliance on the Creative Cloud syncing feature, which enables me to freely move back and forth between my computer and iPad when editing, without jumping through any hoops. For instance, I can be sitting at my computer editing, and then if I need to head out to a doctor's appointment or a meeting with a client and I know I'm going to have some down time, I can grab my iPad and pick up editing where I left off, with those changes automatically syncing to the cloud and back to my computer in near real time (or, if I was working somewhere without WiFi, when I return home). It's a wonderfully useful system!
For this reason alone, a subscription price is at least partially justified (if for nothing else, to cover the service that handles this syncing).
But Adobe has also made a very attractively-priced subscription option for photographers, with a package consisting of Lightroom, Photoshop, and the mobile version of Lightroom (along with a few other apps) currently available for only $120 a year. If this were just for Lightroom, it would be borderline in terms of value, but with Photoshop included as well, it's really a bargain. In any case, I use Creative Cloud syncing so much that I'm pretty much semi-permanently married to Lightroom for this reason alone.
Capture One now has their own iPad app along with some cloud syncing functionality, but it is fairly limited at the time of this writing. Rather than Adobe's bi-directional syncing, Capture One's cloud sync feature appears to be geared toward one specific scenario: a photographer who imports images into their iPad (for instance, while traveling and/or during a shoot) and wants to then transfer those images to their computer. This transfer method is limited to 1000 images at a time (if you have more images that need to be transferred, the prior batch needs to be deleted from the cloud server before more can be uploaded). Images that are on the computer cannot be synced to the iPad, and you can't have an ongoing workflow where edits made on the iPad sync automatically to the computer and vice versa.
So, it remains to be seen how this development will progress and what features it will have, and it will likely take at least a year or two of work before it matures enough to be as comprehensive as Adobe's offering.
Aside from Creative Cloud syncing, the biggest feature that has kept me with Lightroom has been its extensive masking features, which have evolved over the years to be more and more useful and sophisticated.
If you don't know what masking refers to, it's a very powerful tool, enabling you to easily make selective, non-destructive adjustments to just certain parts of an image.
Gradient, Radial, and Brush
The first incarnation of masking in Lightroom consisted of the linear and radial gradient tools, along with a brush. These tools enable you to make a very quick and easy adjustment to a portion of an image. For instance, if the ground in front of the subject or the sky above them is a bit too bright for your liking, a quick drag of the gradient filter can tone these areas down (provided the sky isn't completely blown out). Similarly, suppose you had to shoot something like a send-off with direct flash, and the guests off to the sides who are closer to you ended up more brightly lit than the couple a bit further back (thanks to the inverse square law). A couple of gradients on each side to darken those guests, plus a radial filter over the couple to brighten them up, is a very quick and effective way to dramatically improve this kind of image, and the adjustments can be batch-applied to others in that same series. The Brush tool allows you to paint a mask on a more oddly shaped area.
You can not only do exposure adjustments in this manner, but you can also selectively shift the white balance, which provides a way to compensate for pesky mixed-lighting situations by making certain parts of an image warmer or cooler, and most other parameters (sharpness, noise reduction, clarity, saturation, etc.) can be applied selectively in this manner as well.
Of course, you could always accomplish these things in Photoshop, but it would take a lot longer, and the edits would have to be performed on each image individually. Although this might be fine to do occasionally, it's simply not practical to employ these methods on a larger number of the wedding's images. Lightroom's gradient and radial tools, once you get the hang of using them, serve as a very quick and easy way to improve your images without totally killing your workflow efficiency.
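The inverse square law mentioned above (the reason guests near the flash end up brighter than the couple further back) is easy to quantify. Here's a minimal Python sketch; the distances are hypothetical example values, not measurements from an actual shot.

```python
import math

def relative_illumination(distance, reference_distance):
    """Light received at `distance` relative to `reference_distance`,
    per the inverse square law: intensity falls off with distance squared."""
    return (reference_distance / distance) ** 2

# Hypothetical example: guests 2 m from the flash vs. a couple 4 m away.
ratio = relative_illumination(2.0, 4.0)
print(ratio)             # 4.0 -> the nearer guests receive 4x the light

# Expressed in photographic stops (each stop is a doubling of light):
print(math.log2(ratio))  # 2.0 -> two stops brighter
```

That two-stop spread is the gap the gradient and radial adjustments described above are bridging.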
Capture One did finally add similar local adjustment tools a number of years later, but in early 2022 Adobe raised the bar substantially with its new masking system, which featured AI masking to automatically select the subject or the sky.
Adobe's new AI masking system is truly a game-changer. While the radial and linear gradient filters and the local adjustment brush served as an easy means to brighten or darken (or apply other adjustments to) an area of an image, the downside is that these areas are not precisely defined, so there is some "spillover" (maybe you just want to brighten the couple, but the radial filter will also brighten some of the background behind them). In most cases this is fine, but occasionally there is a need to be more precise. There were various ways to define exactly what area to apply selective adjustments to, but these took more time to accomplish.
AI masking fixes this. With one simple click, your subject is detected and selected, and you are free to apply adjustments just to them. And it works remarkably (I'd say shockingly) well. If desired, with another click, the selection can be inverted (so that everything except the subject is selected). And if needed, the initial AI-created mask can be quickly and easily added to or subtracted from in several different ways (such as with the gradient, radial, or brush tools), with these changes to the mask being non-destructive so that you can go back and adjust them later.
Ideally, every image a photographer captures would be lit and exposed perfectly straight out of the camera and require no such adjustments. But if there's one single defining characteristic of wedding photography, it would be the fast-moving, dynamic nature of the environment we shoot in. We generally don't have the luxury of time to set up perfect lighting for every shot.
Sure, I'd love for the shot of my couple recessing down the aisle to be flawless upon capture, but the reality is that they are going to be lit by a combination of the artificial light inside the church (which is going to be much brighter on the altar behind them), perhaps using a mismatched combination of bulbs of different color temperatures, some daylight coming in through the windows and the main door of the church (if it's opened at the end of the ceremony), and perhaps my flash (which, if I use it, I'll have to decide whether to shoot at normal daylight balance, or gelled to match the warmer artificial light of the church if that is indeed the case).
The end result, straight out of the camera, is likely going to be a shot with the background too bright and yellow, and the couple too dark and blue. There are various methods that can be employed to improve this image, but none are more effective and efficient than Lightroom's AI masking.
The biggest flaw with this feature when it was originally introduced was that it was cumbersome to make adjustments of this kind to one image and sync those changes to other images... after syncing or pasting settings that included an AI mask, you had to click a button on each of those images, one by one, to recalculate the mask. However, in the June 2022 update this was remedied, with AI masks now automatically being recalculated when image settings are synced or pasted.
The Adobe team continues to improve AI masking. The original Select Subject mask creation command, which usually does a great job at selecting the primary subject(s) in the photo, whether one person or multiple, has been supplemented with the addition of being able to select individual people. When you open the Masking tool, Lightroom Classic will detect as many of the people in the image as it can see, and give you the option of just selecting certain individuals (one or multiple). But wait, that's not all! Once you select a person, you also have the option of drilling down to even more levels of detail if you'd like, for instance only selecting that person's hair, eyes, teeth, face skin, body skin, etc. Want to apply an adjustment only to their lips? Or specifically to the eye sclera (the white portion of the eyes)? Or just their eyebrows? You can do that.
Of course, going into that much specificity for normal wedding images may not be practical or necessary, but it's still nice to have these options when needed.
For instance, here’s a real-world example of how Adobe’s AI masking would have saved me some time when editing a wedding from 2021. The bridesmaids wore dresses that were different in style but were all a light grayish color with just a hint of lavender. However, one dress in particular was apparently made with a different type of fabric that responded to light differently than the others. Although it fairly closely matched the color of the other dresses when photographed in daylight, when photographed with flash (with an umbrella during the group shots, and bounced during the reception), its color was dramatically different (darker and greener).
The only thing close to this that I’ve ever encountered is that white fabrics will, thanks to UV brighteners, often photograph with a blueish tint outdoors, but I’ve never seen a colored fabric change like this. To fix a few of these images for the couple’s album, I first had to manually paint a mask with the brush tool, before making the necessary color and tone adjustments to make it more closely match the other dresses.
However, a Lightroom update in Spring of 2023 added yet another level of detail to its AI masking tool. In addition to being able to select a person’s face, eyes, hair, etc., you can now also instantly select just their clothing. This would have saved me considerable time!
In addition to being able to create specific masks as described above, Lightroom also facilitates AI masks being employed in presets. So, you can have a "whiten teeth" preset for example, or "enhance eyes", which generates the mask and applies the appropriate initial adjustments with one click.
ISO-Adaptive Presets
Another feature that keeps me with Lightroom is ISO-adaptive presets (my apologies to Capture One if it does indeed have this feature; as far as I know, it does not). These enable you to have presets with parameters that vary depending on the ISO of a particular image. Using noise reduction (probably the most commonly-used example of this feature), here's how it works.
You take a series of test shots at various ISOs (you can shoot at every single ISO setting available, or just do a few and let Lightroom interpolate the noise reduction settings to fill in the gaps between them), and make the appropriate adjustments to each (for example, minimal or no noise reduction for your ISO 100 test image, and a heavier setting for your maximum ISO). You can then make an ISO-specific preset based on the settings of these images, and can define this preset as your import default.
As a result, immediately upon import, low ISO images will have no noise reduction (or just a small amount), while images shot at maximum ISO will have more noise reduction (and images at ISOs in between will have a proportionally scaled amount).
You may decide to subsequently increase or reduce the noise reduction on certain images as needed, though using an ISO-specific preset as your default provides a much better starting point that, if carefully configured, will be suitable for most of your photos.
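The interpolation behavior described above can be sketched in Python. The anchor ISOs and noise reduction amounts below are hypothetical, and Lightroom's actual interpolation algorithm is Adobe's own; this is only meant to illustrate the idea of filling in the gaps between test shots.

```python
import math

# Hypothetical anchor points: (ISO, luminance noise reduction amount, 0-100),
# standing in for the settings chosen on the test shots.
anchors = [(100, 0), (1600, 25), (12800, 60)]

def nr_for_iso(iso):
    """Linearly interpolate the NR amount between anchors on a log2 ISO
    scale (since ISO doubles with each stop of sensitivity)."""
    if iso <= anchors[0][0]:
        return anchors[0][1]
    if iso >= anchors[-1][0]:
        return anchors[-1][1]
    for (iso_lo, nr_lo), (iso_hi, nr_hi) in zip(anchors, anchors[1:]):
        if iso_lo <= iso <= iso_hi:
            t = (math.log2(iso) - math.log2(iso_lo)) / (
                math.log2(iso_hi) - math.log2(iso_lo))
            return nr_lo + t * (nr_hi - nr_lo)

print(nr_for_iso(100))            # 0  -> no NR at base ISO
print(round(nr_for_iso(400), 2))  # 12.5 -> halfway (in stops) to ISO 1600
print(nr_for_iso(12800))          # 60 -> full NR at maximum ISO
```

An image shot between two test ISOs simply gets a proportionally scaled amount, which is exactly the "fill in the gaps" behavior the import default relies on.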
GPU Used for Export
This is not really a Lightroom advantage over Capture One; it's more of a recent move to feature parity. Up until a Lightroom update in 2022, one source of Capture One envy was that it leveraged the GPU when exporting images, while Lightroom pretty much just used the CPU. Because the M1 and M2 Apple Silicon chips in the MacBook Pro have powerful multi-core GPUs, I hated having all that power sitting there mostly unused (though Lightroom did use the GPU to speed up the display when making adjustments during editing).
But Lightroom now does use the GPU when exporting, which greatly speeds up the process of exporting final JPEGs.
What I'd Like from Lightroom in the Future
Though I am generally very satisfied with Lightroom, there's always room for improvement... here's my wish list:
Out-Of-Focus / Blink Detection
One thing that I like about Capture One that Lightroom does not have is Focus Masking. This is a feature that, when turned on, overlays a color over the portions of the image that are in focus, which can aid in spotting potentially out-of-focus images without having to zoom in.
It also has a feature called Face Focus that automatically shows close-up views of the faces in a photo (via small cropped squares above the full image), enabling you to quickly evaluate focus and blinks without having to zoom in to each image.
I'm hopeful Lightroom not only adds these kinds of features in a future version, but improves on them. Specifically, leveraging the kind of processing that has enabled the amazing AI masking feature set, I could definitely see Adobe implementing a more powerful and comprehensive method of identifying and flagging images where focus may have been missed.
Rather than simply showing an overlay that indicates in-focus portions of the image or showing a close-up of faces as Capture One does, Lightroom could instead intelligently analyze the image and determine what it thinks should be in focus, and if it's not, flag the shot for manual review. They could take this even further by also detecting blinks, which would be helpful for large group shots.
Indeed, in the surveys that Adobe periodically sends out to customers, there have been indications that this kind of functionality is planned or at least being considered.
There are third-party AI apps that can do this sort of analysis, but I've found them cumbersome to implement into my workflow.
More Selective (Masking) Adjustments
For the masking tool, I'd love for HSL/Color adjustments to be enabled. As it stands now, in addition to the normal variety of tone adjustments, you can make certain color adjustments, like shifting the color temperature and tint, reducing or increasing the saturation, or applying a color effect. What I'd like, however, is to selectively apply the kinds of color adjustments that are present in the HSL/Color panel: specifically, adjusting the hue, saturation, and luminance of the various color categories (reds, oranges, yellows, greens, etc.) just for certain parts of the image.
One use case would be if, for example, you have a white piece of clothing that photographed with a blue tint thanks to UV brighteners. Dragging down the blue saturation control in the HSL/Color panel can easily reduce this, but if you have legitimately blue elements in the scene (like flowers, decor, or actual blue clothing), this fix becomes less ideal because those will be desaturated too. Being able to make a hue/saturation adjustment not only for a specific color, but for a specific color within a defined area of the image, would take care of this. The current workaround is to use masking to select the clothing (or whatever element needs the adjustment) and use the regular Saturation slider to pull back the blue.
Improved AI Noise Reduction
Though Adobe released a new AI-powered noise reduction function in the Spring of 2023, the way it has been implemented makes it impractical for my workflow, as it creates duplicate DNG files of every processed image, rather than just having it be a non-destructive setting that is processed on the fly when showing the preview or exporting the image.
I realize that this is likely at least partially because the processing power needed for this new noise reduction is probably too high for it to be applied in real time and still maintain a satisfactory user experience. But I'm hopeful that Adobe will at least be able to, in a future version, implement this new noise reduction in a way that, even if it cannot be calculated quickly enough to be handled in the same manner as the current noise reduction, can be applied to batches of images without requiring the creation of a duplicate DNG file. I would even settle for AI noise reduction simply being an option selected when exporting the edited images.
In the meantime, I'll use the traditional noise reduction (which is fine for most of my images) and deploy the AI noise reduction only in specific instances where it's really needed.
Smarter Healing / Spot Removal
Lightroom now has a very effective Content-Aware Remove tool that works great for removing objects and spots in photos. But there is still room for improvement. In particular, I'd like for this tool to be able to sync to multiple images in a more useful way.
As it stands now, if you remove, say, a stray flower petal on the ground in the formal group shots and then sync that to subsequent images in the series, that removal will be placed in exactly the same spot in those synced images. But because you're likely not shooting with the camera locked down on a tripod, and may be adjusting your focal length or physically moving closer or further away from the groups, the flower petal is going to be in a slightly different position in each image, so syncing that removal is not effective.
My hope is that Adobe is able to implement a way to identify an object such as that, and instead of making synced instances of the Content-Aware Remove tool be positioned absolutely in the image frame, it would instead shift the removal instance based on where that particular object appears in each synced image.
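To illustrate the idea, here's a hypothetical Python sketch of that relative repositioning. Nothing here reflects how Adobe would actually implement it; `shifted_removal` and the normalized coordinates are assumptions made up purely for the example (the object positions stand in for whatever recognition the app might perform).

```python
def shifted_removal(original_spot, original_object_pos, new_object_pos):
    """Translate a removal spot by the tracked object's movement between
    frames, rather than pasting it at the same absolute coordinates.

    All positions are hypothetical (x, y) pairs in normalized image
    coordinates (0.0-1.0).
    """
    dx = new_object_pos[0] - original_object_pos[0]
    dy = new_object_pos[1] - original_object_pos[1]
    return (original_spot[0] + dx, original_spot[1] + dy)

# Example: the petal was detected at (0.40, 0.80) in the edited image and at
# (0.43, 0.78) in the next frame, so the synced removal spot follows it.
spot = shifted_removal((0.40, 0.80), (0.40, 0.80), (0.43, 0.78))
print(tuple(round(v, 2) for v in spot))  # (0.43, 0.78)
```

The point is simply that the removal would track the object across the series instead of staying pinned to one spot in the frame.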
A Few Other Miscellaneous Capture One Features I Like
I'm envious of several other Capture One features. One is called Speed Edit. An area where Lightroom has always lagged is the ability to assign keyboard shortcuts to make adjustments to images. It has some capabilities here, but I don't find them to be usable. As a result, I've always either used a separate app to enable these kinds of keyboard shortcuts, or have used a hardware control surface. Capture One's Speed Edit is my dream implementation of using the keyboard to edit.
The keyboard shortcut apps I've used in the past with Lightroom worked fine. I'd put my most commonly-used adjustments on pairs of keys within easy reach with my fingers on the home row of the keyboard; for instance, F would increase exposure, D would reduce it, while S would increase contrast and A would reduce it (white balance adjustments would be on the pairs of keys right above these, and some other adjustments would be similarly assigned to my right hand).
Capture One's method (which is built-in, no need for an additional app to add this functionality) is to instead have the user hold a particular key down (corresponding to the desired adjustment), and a temporary slider appears, which can be adjusted by scrolling the trackpad or mouse (or pushing the arrow keys up or down). Though I do like my Monogram Creative Console setup, I could definitely see myself flying through editing very efficiently with Capture One's Speed Edit feature.
Another Capture One feature (currently in beta) that I'd love to have is an automatic sensor dust removal tool. Sensor dust is typically not a big deal for me because I am most often shooting at wide apertures, which render dust as a soft, diffuse blur that usually goes unnoticed. But occasionally it'll still be a problem even on these wide-aperture images, and it becomes more of an issue when I shoot stopped down (like for formal group shots).
I can remove it fairly quickly in Lightroom, but it requires clicking on each instance of dust. It would be nice for the app to automatically recognize dust and remove it with just one click. A possible workaround in Lightroom that could be suitable (though I have not yet tried it) would be to filter the view to show only one particular camera, use the Content-Aware Remove tool with an appropriately sized brush, click on each instance of dust in one image, and then sync that to the other images. However, this would have to be done early in the editing process, because if an image is cropped and/or rotated, the positions of the synced Content-Aware Remove clicks would be incorrect.