Monthly Archives: February 2016

A followup about Error 53

Today, Apple released at least one statement which describes Error 53 as a factory test that was never intended to be released to customers – apparently, while disabling Touch ID was intentional, bricking the phones was not.

The article giving this information, and others referencing it, point to Apple's KB article. Apple's article makes no mention of Error 53 being a factory test, nor of reimbursing out-of-warranty replacements.

I don’t quite know what to make of this. I’m certainly willing to believe that it was only intended as a factory test and that bricking was never intended; it seems far more in line with Apple’s general attitude. And despite the justifications I offered in my last post, I’m not surprised that it was possible to avoid bricking the device while still maintaining reasonable security. I’m actually gratified to find that Apple had never intended such a blatantly awful UX. I have plenty of complaints about Apple’s UX in recent times, but nothing measured up to abruptly bricking devices without warning. It’s also entirely plausible that such a measure escaped QA, given that it’s an expected error in the factory and was never supposed to appear in the wild.

At the same time, it took Apple quite some time to respond, to the point of a class action having been filed. Most likely that’s due to the sensitivity of the issue, especially in the wake of the recent well-publicized court order directing Apple to break a particular iPhone’s encryption. Certainly the existence of that issue only makes the timing more interesting.

I’m not trying to accuse Apple of anything here; I’m personally satisfied with how they’ve handled the Error 53 situation. While I favor “right to repair”, and strongly dislike the trend towards hardware that the customer doesn’t effectively own, securing a device that carries important data, given the infamous gullibility and technical inexperience of the majority of users, is a knotty problem at best, and Apple is walking a fine line with relatively few missteps (though the “few” here is a long, long way from zero). What I do wonder about is what more there is behind some of the decisions that were made, and the timing of those decisions. If nothing else, it’s a matter of curiosity.

A short rant about Error 53 and why it exists

So I went on a bit of a tear at some people I know when they were complaining about Apple’s implementation of Error 53, which (to the best of my understanding) bricks iPhones detected as having had a third-party repair performed on the Touch ID sensor. Here are the highlights, slightly edited for language.

EDIT: A number of people have asked why Apple didn’t disable just Apple Pay and leave the rest of the phone functional. Technically speaking, I can’t do more than guess at the details, but it’s my presumption that this is the only way they could prevent jailbreaks and other “the user will do any stupid thing rather than actually listen to security warnings” workarounds (the effect of user arrogance on security is a whole separate issue from user ignorance that I’m not going to get into) from getting around the error, which would have rendered it useless. If there were any workaround for the error, the protection would effectively not exist, and then all Apple would have done is make themselves the target of more “annoying popups” complaints. It’d actually be worse PR for them than Error 53 is now! Once again, I am 100% in agreement that the user experience is abysmal and could have been handled far better, even within these technical constraints. But it’s still my guess (and again, I do not speak from any position of actual knowledge whatsoever) that disabling just Apple Pay wasn’t a viable option.

And let’s not forget, the data being guarded here lives in the Secure Enclave. That means your fingerprints, which are biometrics you can’t (practically) change, and your financial data, exposure of which typically costs you even in the best case.

Here’s what gets Apple to do things like this: USERS ARE STUPID! Given the choice, users will do the wrong thing almost every time, especially with respect to security. It’s the same reason Windows Update is now mandatory in most Windows 10 setups despite the screaming about it!

Now granted, I do agree that Error 53 should not cause an absolute brick, as it seems to. But I absolutely, 100% believe a measure like it is reasonable.

Here’s the problem – Let’s say Apple doesn’t do this, and someone does break the system and steal a bunch of money. Who are users most likely to blame? Apple, of course, for making a weak system. Any one person might individually think to blame the malicious third party, but I will tell you now it has been proven through harsh experience that the overwhelming majority of users will blame the manufacturer for not making the device more secure!

Apple can suffer the blame for being secure more than it can suffer the fallout from not being secure. Same is true of MS and Google.

I know just enough about how iPhones work to wonder if maybe bricking is literally all Apple can really do. For all I know, if Apple lets the device boot ANY level of the OS, even with passcode security enabled, a compromised sensor could very well then have enough to work with to trick data out of the secure enclave/element (whichever it is!).
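To make that guess concrete, here is a minimal sketch, in Python and bearing no relation to Apple’s actual (non-public) firmware, of what a challenge-response pairing check between a secure enclave and a factory-paired sensor might look like. Every name here (`pair_sensor`, `verify_sensor_at_boot`, and so on) is hypothetical; this is only the general shape of the technique.

```python
import hashlib
import hmac
import os

class SensorPairingError(Exception):
    """Raised when the sensor fails the pairing check (the 'Error 53' analogue)."""

def pair_sensor(enclave_store: dict, sensor_store: dict) -> None:
    # At the factory, the enclave and the sensor are provisioned with a
    # shared secret pairing key that never leaves either component.
    key = os.urandom(32)
    enclave_store["pairing_key"] = key
    sensor_store["pairing_key"] = key

def sensor_respond(sensor_store: dict, challenge: bytes) -> bytes:
    # The sensor proves it holds the factory-paired key by answering an
    # HMAC challenge; it never transmits the key itself.
    return hmac.new(sensor_store["pairing_key"], challenge, hashlib.sha256).digest()

def verify_sensor_at_boot(enclave_store: dict, sensor_store: dict) -> None:
    # Before allowing the OS to trust the sensor, the enclave issues a
    # fresh random challenge and checks the response.
    challenge = os.urandom(16)
    expected = hmac.new(enclave_store["pairing_key"], challenge, hashlib.sha256).digest()
    response = sensor_respond(sensor_store, challenge)
    if not hmac.compare_digest(expected, response):
        # A replacement sensor lacks the factory key and cannot answer
        # correctly; refusing to proceed at all is the "brick" outcome.
        raise SensorPairingError("error 53: sensor failed pairing check")
```

A third-party replacement sensor, holding its own key rather than the factory one, fails the challenge; if the system’s only response to that failure is to refuse to boot, you get something shaped like Error 53.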

At this point it was suggested that Apple could add a slider on the Error 53 screen which warned the user that Apple was not responsible for the consequences if the user chose to continue. To which I said:


Because every single user will instantly slide the slider. And you’re back to “well Apple didn’t actually do anything”.

In fact, the malicious third party will just say “you’ll get this warning after the repair, don’t worry about it”. And legit third parties would have to say the same! So you’re back to the problem of the trust model.

You must predicate everything you do in the name of security on the presumption that users are hopelessly lacking in knowledge.

They WILL be socially engineered into giving up credentials.

They WILL be socially engineered into turning off security features that give them even a moment’s annoyance, even just once.

They WILL often do these things without any need to be prodded into it.

They WILL follow arcane, complicated, meaningless-to-them instructions to disable some critical safety features just to get a happy kitty running around on the lock screen instead of a static wallpaper. Don’t think so? What do you think jailbreaking is?

The only way to fix this is to deal with the FUNDAMENTAL failures of the entire model of tech. Tech is not designed for people who don’t understand it. It never has been, and it still is not. That includes the iPhone and all things like it.

Look at a different field, like finance – credit card debt is an entire industry designed around the premise that users are stupid.

Look at, say, being an electrician. I personally don’t know more than the basics of electronics; I couldn’t tell a three-phase power line from a single-phase one with an illustrated freaking diagram. BUT I DON’T HAVE TO, because the person who wired up my apartment didn’t leave all the wires hanging around outside the walls, and there’s insulation on my power cables!

Computers, right up to and including the iPhone and similar, are effectively designed with all the live wires hanging out.

So that’s basically my opinion. All of my opinions are very much specifically my own; they don’t represent those of anyone I have ever worked for, work for now, or ever will work for. If they did, I’d probably be a lot more critical, because I’d have to worry more about looking biased. I’d be pointing out more forcefully how Apple has a lot of problems with listening to what users want, and the same for Microsoft.

But when you get down to it, none of it is a problem with any one company or piece of technology. Apple is just the latest scapegoat in a debate that has more to do with the fact that society as a whole has a broken trust model than anything about who owns what. Could Error 53 have been handled better? You better believe it could have. But it’s a relatively reasonable solution in an overly complicated world where you effectively can’t trust anyone to know what they’re doing.