Evaluating Photoshop Lightroom and ACDSee Pro Photo Manager

I recently tried out the trial versions of Adobe Photoshop Lightroom and ACDSee Pro Photo Manager.  I was particularly interested in how they would fit into a photography workflow: basic image adjustments such as curves, sharpening, dodging and burning, fixing minor problems, and cropping.  For more serious manipulation I can always use Photoshop or the GIMP, but I do like the non-destructive editing process both of these products offer.

Last time I used ACDSee there was no such thing as ACDSee Pro Photo Manager.  Of course, there was also no such thing as Photoshop Lightroom.

Problems with Photoshop Lightroom:

  • The user interface is annoyingly slow and unresponsive.  It’s not ridiculously bad, but it is jerky enough to annoy.  I can tolerate that the calculations required to apply filters to an image take time and processing power, but any significant delay or slowness in simply expanding, collapsing, or resizing panes or windows is quite unnecessary.  Is it due to their use of non-native widgets (i.e., skinning)?  Probably, though that doesn’t mean an interface needs to be slow – take recent versions of Firefox on Windows, for example.
  • It lacks the ability to correct for barrel/pincushion distortion, and to do perspective correction.  I need to do barrel/pincushion distortion correction fairly often – for example on any photos containing buildings or straight lines.  This means it would be necessary to bring images into Photoshop a lot of the time for what should be a basic correction, even though more advanced corrections like primary colour adjustments and chromatic aberration correction, which I would probably need less often, are included.  Note that I couldn’t find this feature in ACDSee either, but based on price I had higher expectations of the Adobe product.
  • The noise removal was not too useful to me.  On a scale of 0 to 100, where 0 is no noise removal and 100 is maximum, 0 is not enough and even 1 is too much – especially for chroma.  There is no setting in between 0 and 1.  Then again, noise removal was not too good on ACDSee either, but in a different way.
  • Using Lightroom to simply browse photos on the hard drive was not very intuitive to me.  I usually prefer browsing my existing file hierarchy, and there seemed to be no way to do that – I had to ‘import’ images into ‘albums’ or similar, and those albums didn’t necessarily correspond to locations on the hard drive.  It’s an extra layer of confusion, and it means I can’t be sure I’m grabbing the right images if I go back into an Explorer window and just drag and drop.

Problems with ACDSee:

  • In any colour space other than sRGB, everything seems about ten times as slow.  Instead of dragging a slider and seeing the colour of the image change as you drag, you start dragging the slider and wonder why the program seems to have stopped responding.  Then, a few seconds after you’ve let go of the slider and started clicking randomly on the screen to see whether anything you’re doing has any effect, the colour finally changes in the preview.  In practice, this means it isn’t possible to do adjustments with any colour space other than sRGB loaded.  That would be tolerable, except that it’s a hassle: if you forget, leave a wider space loaded and make further adjustments, it all goes slow again.  It’s also slightly disappointing that a piece of software aimed at professionals would assume its users will all be using sRGB.  Notwithstanding the whole sRGB vs colour management debate, there are a lot of photographers who do value wider-gamut colour spaces.
  • The sharpening feature is decent, but not as good as Lightroom’s.  The sharpening radius can only be set in whole pixels, so sharpening with a radius of 0.6 is not possible.  Furthermore, there is a ‘threshold’ for sharpening detail, but it acts as a sudden cutoff – unlike Lightroom’s ‘detail’ slider, which allows a smooth transition between the sharpening applied to high-contrast edges and that applied to smoother surfaces.  As a result, sharpening an image with a lot of grain cannot look nearly as good.

Of the two, it is still hard to choose.  Lightroom has better sharpening and smoothing, so on quality it would win for me, but I prefer ACDSee’s approach to browsing files.  Then again, ACDSee is not really usable for adjustments in colour spaces other than sRGB; while I don’t expect to do that often, I might want to eventually.  And yet, ACDSee is still cheaper.

Looking at the LGPL license

The Lesser General Public License (LGPL) is a software license that is based on the GPL, but is more permissive.

I have seen the GPL software license described as a cancer before, and the analogy does hold in a sense, even though it misses the point.  If you borrow any code released under the GPL for use in your own source code, any of your own code that the GPL code is mixed with must also become GPL if you ever plan on distributing the software.  However, this rule of the GPL is actually a very good thing, especially for hard-working open source developers, because it ensures that their work can never be exploited by companies looking to make a buck, unless those companies give their contributions to the code back under the same license, effectively becoming part of the same open source development effort.

The LGPL is mostly the same as the GPL except that it relaxes some of these restrictions.  I am not a legal expert, but I have come to the following understanding.  In particular, it allows LGPL code to be combined in the same project with code under any other license, including proprietary licenses, subject to certain conditions.  If the whole thing – the LGPL code together with the other code – is distributed together, it is known as a ‘combined work’ in the terminology of the LGPL.  The distribution must abide by the following (the complete list is in the text of the license under ‘Combined Works’):

  • There must be some sort of clear separation between the LGPL code and the other code.  In particular, it must be possible for the recipient to modify the LGPL code or replace it entirely with other code, such as a modified or later version of the library.  Therefore, if it is a compiled program, the LGPL code must either be dynamically linked (e.g., a separate DLL or shared object) so that it can easily be swapped for a compatible library and remain interoperable; or, if it is statically linked, the minimum required source files and/or object files must be provided, so the program can be re-linked against an alternative library (see the sketch after this list).  The non-LGPL portion may not contain any part of the LGPL code except for very simple header files.
  • It must be clearly indicated which part of the code is LGPL-covered, and that part’s original copyright notice must be included, along with the text of the LGPL (and of the GPL on which it is based).
  • If the combined software displays copyright notices while running, then the copyright notice for the LGPL-covered portion must also appear there, along with a reference directing the user to the LGPL and GPL.
  • In some cases, you must also provide installation information detailing how to install and run a modified version of the LGPL-covered code in the combined application.
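
As a concrete illustration of the dynamic-linking case above, here is a minimal sketch in Python.  It loads a hypothetical LGPL-covered library through its public interface at run time, so the recipient could drop in a modified or later build of that library without needing anything from the proprietary part.  The library name (lgpl-codec) and its function are invented for illustration; they are not a real library.

    # Hypothetical sketch: the application talks to an LGPL-covered library
    # only through a dynamically loaded shared object, so a recipient can
    # replace that .so/.dll with a modified or later version of the library.
    # "lgpl-codec" and encode_frame() are invented names for illustration.
    import ctypes
    import ctypes.util

    # Locate the shared library on the system; a user-built replacement with
    # the same interface would be found and used just as easily.
    path = ctypes.util.find_library("lgpl-codec") or "liblgpl-codec.so"
    codec = ctypes.CDLL(path)

    # Declare the one function called across the library boundary.
    codec.encode_frame.argtypes = [ctypes.c_char_p, ctypes.c_size_t]
    codec.encode_frame.restype = ctypes.c_int

    status = codec.encode_frame(b"raw pixel data", 14)

Nothing from the library itself (beyond its public interface) is copied into the application here, which is roughly the kind of separation and replaceability these conditions are after.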

So far, that is just the list of obligations you take on if you redistribute LGPL code as part of your non-LGPL application.  The text of the LGPL doesn’t really explain the benefits, or what opportunities it opens up for you.  If you distribute a combined work consisting of someone else’s LGPL code together with other code under a different license, the benefits are these:

  • You don’t need to release source code for the rest of your application (i.e., the non-LGPL part).  The only exception is the one described above – if it’s all statically linked, you need to provide just enough source and/or object files to allow re-linking with an alternative or modified version of the LGPL code.  If you are linking dynamically and interacting over a normal API, you don’t need to worry about that.
  • You don’t need to release the rest of your application under the GPL.  You can use any license you want, including more restrictive proprietary licenses, provided that when you do distribute it you follow the rules.
  • Unlike code under GPL version 3, which cannot be used if you are implementing copy protection or DRM software, code licensed under LGPL version 3 can be used in an application that includes copy protection or DRM.

So the LGPL is a fairly permissive license in comparison to the GPL, which cannot be ‘combined’ with proprietary code in this way.  What can’t you do with the LGPL?  Apart from the relaxations already discussed, the LGPL retains all of the other restrictions of the GPL.  I won’t go into all of the detail of the GPL here, but here are some of the restrictions that still apply under the LGPL and that would not apply under an even more permissive license (such as a BSD or X11 license):

  • Anybody wishing to modify and redistribute LGPL code must also make the source code of the LGPL code, including their modifications, available and licensed under the LGPL.  Therefore, the open source community and the original author can still benefit if a commercial enterprise contributes improvements to the LGPL code.
  • All existing copyright notices in the LGPL code, as well as the GPL and LGPL licenses themselves, must be prominently displayed in any redistribution.  In addition, any application using the LGPL code must display those copyright notices and refer to the license wherever it displays its own copyright notices.

Something else of note: if you build software that relies on GPL or LGPL code but distribute your software without any of that code included (i.e., the user has to download and install the library themselves), then it is not considered a ‘combined work’ and you don’t need to worry about complying with either license’s terms.  However, if you include even so much as a header file from the GPL or LGPL code in your software, then you are bound by the terms of the GPL or LGPL.

Why CAPTCHA are not a security measure

CAPTCHA, those little groups of distorted letters or numbers we are asked to recognise and type into forms, are becoming more and more common on the web, and at the same time, frustratingly, more and more difficult to read, like visual word puzzles.  The general idea behind them is that by filling one in, you prove that you are a human being, since the task would be difficult for a computer program to carry out.  CAPTCHA aim to reduce the problem of spam, such as comment spam on blogs or forums, by limiting the rate at which one can send messages or sign up for new accounts enough that spamming becomes unattractive.

Example CAPTCHA

There is a problem, however, when these are talked about as a ‘security measure’.  They are not a security measure, and this misconception is based on a flawed understanding of security: that humans can be trusted whereas computers – bots – cannot.  CAPTCHA are not a bouncer; they cannot deny entry to any human that looks like they are going to start trouble.  They can only let in all humans.  If your security strategy is something along the lines of ‘If it’s a human, we can trust them’, then you are going to have problems.

Another problem with CAPTCHA is that they are relatively easy to exploit – by which I mean it is easy to pass a large number of tests in order to spam more efficiently.  While a single human can only look at a certain number of images per hour, a number of humans with a lot of time on their hands can look at thousands of them per hour.  If the internet has taught us anything, it’s that there are a lot of humans on the internet with a lot of time on their hands.  So, while a CAPTCHA will slow one human down, it won’t slow hundreds of humans down.

The so-called ‘free porn exploit’, a form of relay attack, takes advantage of this.  CAPTCHA images are scraped from a site and shown on the attacker’s site.  On the attacker’s site is a form instructing visitors to identify the letters in the image, often with the promise of access to something such as free porn.  All the attacker needs to do is drive a whole bunch of people to that form and then use the resulting solutions to carry out the intended dirty work at the original site.

It doesn’t have to be porn, of course – that is just a popular way of illustrating this form of circumvention.  Any time a human wants something, or is even a little bit bored, you can ask them to fill out a form.  Want free jokes in your inbox?  Fill out this form.  Put enough humans to work against a CAPTCHA and the CAPTCHA becomes ineffective.

Technology for solving CAPTCHA automatically, without requiring any human intervention at all, is also evolving quickly.  Scripts that can solve known CAPTCHA variants become available all the time, and in response, new CAPTCHA variants keep emerging, each one more difficult to read than the last.  The computing power required to recognise characters visually is trivial compared to, for example, the power required to crack a good password from its hash, or to break an encryption scheme given only encrypted data.  The CAPTCHA that are unbreakable today are only unbreakable through obscurity; they are constructed differently enough from previous ones that current computer programs don’t recognise the letters and numbers in them.

What are alternatives to CAPTCHA?

The alternative will vary depending on what you are using CAPTCHA for.  If you are using them to reduce spam on your blog, they will probably keep doing that for a while, though you may find yourself resorting to other options.  Bayesian or rule-based filtering, or a combination of both, are effective methods of reducing spam, and have the added benefit that they do not annoy the user or impede usability and accessibility the way CAPTCHA do.
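
For illustration, here is a minimal sketch of the Bayesian approach: a tiny naive Bayes classifier that learns from comments marked as spam or legitimate and then scores new comments.  It is deliberately simplified – real filters use better tokenisation, larger training corpora and tuned thresholds – and all of the names in it are my own, not from any particular filtering library.

    # A toy naive Bayes comment filter, sketching the Bayesian filtering idea
    # mentioned above.  Tokenisation and training are deliberately naive.
    import math
    import re
    from collections import Counter

    def tokens(text):
        return re.findall(r"[a-z0-9']+", text.lower())

    class BayesFilter:
        def __init__(self):
            self.words = {True: Counter(), False: Counter()}   # spam / ham word counts
            self.messages = {True: 0, False: 0}                # spam / ham message counts

        def train(self, text, is_spam):
            self.words[is_spam].update(tokens(text))
            self.messages[is_spam] += 1

        def spam_score(self, text):
            # Combine per-word likelihoods in log space, with add-one
            # smoothing so unseen words do not zero out the probability.
            vocab = len(set(self.words[True]) | set(self.words[False]))
            total = self.messages[True] + self.messages[False]
            logp = {}
            for is_spam in (True, False):
                logp[is_spam] = math.log(self.messages[is_spam] / total)
                word_total = sum(self.words[is_spam].values())
                for tok in tokens(text):
                    logp[is_spam] += math.log(
                        (self.words[is_spam][tok] + 1) / (word_total + vocab))
            # Probability that the comment is spam.
            return 1 / (1 + math.exp(logp[False] - logp[True]))

    f = BayesFilter()
    f.train("buy cheap pills online now, visit my site", is_spam=True)
    f.train("great post, I had the same problem with sharpening", is_spam=False)
    print(f.spam_score("cheap pills now"))   # close to 1.0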

If you are using CAPTCHA as a security measure, you need to be sure you are doing so based on a proper understanding of what kind of security they bring.  Certainly, they cannot do anything to keep unauthorised or unwanted people out, as that is not what they are designed for.  They also have severe limitations in their ability to reduce spamming or flooding, because they are relatively easy for a sufficiently organised attacker to bypass.

An earlier version of this article was published at SitePoint.com in November, 2005.

When tech companies team with repressive governments

New legislation may impose penalties on U.S. companies that help foreign governments commit human rights abuses, according to this report at ZDNet.  It says that representatives from Google, Yahoo, Microsoft and Cisco were recently ‘drilled’ by Congress for ‘several hours’ on their cooperation with the Chinese government.

In a high-profile case, Yahoo has previously come under fire from its shareholders for providing the Chinese government with the identities of pro-democracy dissidents, leading to their ‘arrest’.

One of the bits I don’t like about human rights abuse is the part where people are arrested and taken from their homes and locked up for no good reason and with no fair trial.  I’m also not a fan of torture, or of seizing people’s homes without fair compensation.  Anything that ends in humans suffering.  One would have thought that helping a foreign government do this sort of thing to its citizens, either by providing intelligence or with funding, would be against the law – whether you’re a spy for the Chinese government or a technology company like Yahoo.

When human suffering is involved, providing information to foreign governments goes beyond being an ethical compromise and becomes a world problem.

The password problem

The first problem with passwords on the web is that passwords alone are not strong authentication.  The second problem is that people have too many of them to remember.

Some people will reuse the same password on several different services, leaving all those services vulnerable if their password is compromised through any one of them.  Other people use different passwords for several different services, but need to resort to writing passwords down or making heavy use of ‘forgotten password’ features because they are too hard to remember.

Online authentication

In the offline world, pretty good authentication can be achieved by combining a card with a PIN.  This is an implementation of two-factor authentication.  In short, this means that authentication is based not just on something a person has in their possession (like a key or card), something a person knows (like a PIN or password) or their own body (like a fingerprint or DNA), but on at least two of those three categories.  The principle behind this is that it is significantly more difficult for an attacker to steal your identity if they need to both obtain something you have, and find out something you know.  When your bank card is stolen, the thief cannot access your account unless they also know your PIN.  If your PIN is guessed, overheard or intercepted somehow, the snoop cannot access your account without your card.
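
For reference, the kind of check a two-factor scheme performs can be sketched in a few lines: the knowledge factor is a password checked against a stored hash, and the possession factor is a one-time code from a token or phone app, computed here with the standard TOTP algorithm (RFC 6238).  The function and variable names are illustrative only, not taken from any particular product.

    # Minimal two-factor sketch: authentication succeeds only when both the
    # knowledge factor (a password) and the possession factor (a time-based
    # one-time code from the user's token or phone) check out.
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret, interval=30, digits=6):
        # Standard TOTP (RFC 6238): HMAC-SHA1 over a time-based counter,
        # then dynamic truncation down to a short decimal code.
        counter = int(time.time()) // interval
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    def authenticate(password, code, salt, stored_hash, device_secret):
        knows = hmac.compare_digest(
            hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000),
            stored_hash)                                        # something you know
        has = hmac.compare_digest(totp(device_secret), code)   # something you have
        return knows and has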

Online, however, strong authentication is a lot more difficult, because authentication has to rely almost entirely on something you know.  This means that rather than just being one factor in authenticating you, your user name and password combination becomes the only factor.  It becomes a lot easier for someone to steal your identity, as they only need to intercept your password somehow.

When we sign up for online accounts, we are told to create passwords that are “strong” and unique.  The general idea of “strong” here is that the password is hard to guess, which also helps make it hard for someone to catch by looking over your shoulder.  However, a strong password still does not protect against a situation where someone bugs your computer, or your ISP’s systems, and sees your password as it is transmitted.  This is relatively easy to do – as easy as exploiting a bug in any piece of software on your system or your ISP’s, or an ISP betraying your trust.

Making passwords stronger also makes them harder to remember.  This is a good thing up to a point: someone who does see you typing your password is less likely to recall it or catch all the characters.  But beyond that point, being harder to remember actually detracts from security, because users are more likely to write passwords down in order to remember them.  When the password is the only secret thing that can authenticate a person for an online account, having that password written down makes a less-than-ideal situation worse: the password is now vulnerable not only to eavesdropping, but also to physical theft.  Compared with two-factor authentication, we have gone in the opposite direction.  Not only is there just a single factor, but that single factor can be attacked in two ways, and an attacker can choose either.

Multiple accounts

The number of accounts for which we need to authenticate ourselves (prove our identity) is growing.  It is not uncommon for someone to have a dozen or more accounts with different online services, ranging in importance from online banking and auction websites right down to simple blogs and discussion forums.

If we assume that people use a different password for each, it is too much to expect them all to be remembered.  It is common practice for people to just write them all down, but as noted above, that detracts from security.

The alternative for users is to re-use the same password for multiple accounts, but this is putting all their eggs in one basket.  If that one good password is compromised, then all of those accounts are vulnerable.  And a stolen or intercepted password may not even be the user’s fault – a company hosting one of their accounts may have let it slip through negligence.

A reasonable person probably uses a combination of the above – using unique passwords on only those very important accounts such as their online banking, while re-using a common good password for everything else.

Technological solutions

OpenID is a distributed authentication mechanism which aims to let someone log in to accounts with several different companies, without exposing their password to those companies.  The password is sent only to the single OpenID provider, which authenticates the person and then signals to the company that the person has been authenticated according to that OpenID account.
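
To illustrate the shape of that interaction, here is a toy, self-contained simulation – it is not real OpenID, and every name in it is invented – showing that the provider is the only party that ever checks the password, while the relying company only verifies the provider’s assertion.

    # Toy simulation of the flow described above; not a real OpenID
    # implementation, just the shape of it.  All names are invented.
    PROVIDER_PASSWORDS = {"alice.example-openid.net": "correct horse battery"}

    def provider_authenticate(identifier, password):
        # Only the OpenID provider ever sees or checks the password.
        if PROVIDER_PASSWORDS.get(identifier) == password:
            return {"identifier": identifier, "assertion": "signed-by-provider"}
        return None

    def provider_verify(assertion):
        # The relying company asks the provider whether the assertion is
        # genuine (in real OpenID this involves signatures or a check-back).
        return bool(assertion) and assertion["assertion"] == "signed-by-provider"

    def relying_site_login(assertion):
        # The relying company never handles the password, only the assertion.
        return provider_verify(assertion)

    # The user authenticates once, with their provider...
    assertion = provider_authenticate("alice.example-openid.net",
                                      "correct horse battery")
    # ...and each company accepts the provider's word for it.
    print(relying_site_login(assertion))   # True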

This helps cut down the number of ways a password can be intercepted.  However, it does not change the fact that if that one password is compromised, the attacker can gain (albeit temporary) access to all of those accounts.  In fact, it could make things worse: upon gaining entry to your OpenID account, the attacker might be presented with a nice little list of all the sites you have approved – i.e., a list of everywhere this OpenID account can be used.  This would depend on the OpenID provider.

Educating users

Educating users is something that does not work as well in practice as in intention.  Most users already know that they should not write down their passwords, and that passwords should be as strong and unique as possible.  However, they will continue to behave in whatever way is most convenient to them – creating easy-to-remember passwords, re-using them, or writing them down – just to make it easier to deal with so many.