Tag Archives: rant

A short rant about Error 53 and why it exists

So I went on a bit of a tear at some people I know when they were complaining about Apple’s implementation of Error 53, which (to the best of my understanding) bricks iPhones which have been detected as having a third-party repair performed on the Touch ID sensor. Here are the highlights, slightly edited for language.

EDIT: A number of people have asked why Apple didn’t just disable Apple Pay and leave the rest of the phone functional. Technically speaking, I can’t do more than guess at the details, but my presumption is that this was the only way to keep jailbreaks and other “the user will do any stupid thing rather than actually listen to security warnings” workarounds (the effect of user arrogance on security is a whole separate issue from user ignorance that I’m not going to get into) from getting around the error, which would have rendered it useless. If there were any workaround for the error, the protection would effectively not exist, and all Apple would have accomplished is making themselves the target of more “annoying popups” complaints. It would actually be worse PR for them than Error 53 is now! Once again, I am 100% in agreement that the user experience is abysmal and could have been handled far better, even within these technical constraints. But it’s still my guess (and again, I do not speak from any position of actual knowledge whatsoever) that disabling just Apple Pay wasn’t a viable option.

And let’s not forget, the data being guarded here lives in the Secure Enclave. That means your fingerprints, which are biometrics you can’t (practically) change, and your financial data, the exposure of which hurts you even in the best case.


Here’s what gets Apple to do things like this: USERS ARE STUPID! Given the choice, users will do the wrong thing almost every time, especially with respect to security. It’s the same reason Windows Update is now mandatory in most Windows 10 setups despite the screaming about it!

Now granted, I do agree that Error 53 should not cause an absolute brick, as it seems to. But I 100% believe a measure like it is reasonable.

Here’s the problem – let’s say Apple doesn’t do this, and someone does break the system and steal a bunch of money. Who are users most likely to blame? Apple, of course, for making a weak system. Any one person might individually think to blame the malicious third party, but I will tell you now: it has been proven through harsh experience that the overwhelming majority of users will blame the manufacturer for not making the device more secure!

Apple can suffer the blame for being secure more than it can suffer the fallout from not being secure. Same is true of MS and Google.

I know just enough about how iPhones work to wonder if maybe bricking is literally all Apple can really do. For all I know, if Apple lets the device boot ANY level of the OS, even with passcode security enabled, a compromised sensor could very well then have enough to work with to trick data out of the secure enclave/element (whichever it is!).

At this point it was suggested that Apple could add a slider on the Error 53 screen which warned the user that Apple was not responsible for the consequences if the user chose to continue. To which I said:

No.

Because every single user will instantly slide the slider. And you’re back to “well Apple didn’t actually do anything”.

In fact, the malicious third party will just say “you’ll get this warning after the repair, don’t worry about it.” And legit third parties would have to say the same! So you’re back to the problem of the trust model.

You must predicate everything you do in the name of security on the presumption that users are hopelessly lacking in knowledge.

They WILL be socially engineered into giving up credentials.

They WILL be socially engineered into turning off security features that cause them even a moment’s annoyance, even just once.

They WILL often do these things without any need to be prodded into it.

They WILL follow arcane, complicated, meaningless-to-them instructions to disable some critical safety features just to get a happy kitty running around on the lock screen instead of a static wallpaper. Don’t think so? What do you think jailbreaking is?

The only way to fix this is to deal with the FUNDAMENTAL failures of the entire model of tech. Tech is not designed for people who don’t understand it. It never has been, and it still is not. That includes the iPhone and all things like it.

Look at a different field, like finance – credit card debt is the result of companies designing an entire industry around the premise that users are stupid.

Look at, say, being an electrician. I personally don’t know more than the basics of electronics; I couldn’t tell a three-phase power line from a single-phase one with an illustrated freaking diagram. BUT I DON’T HAVE TO, because the person who wired up my apartment didn’t leave all the wires hanging around outside the walls, and there’s insulation on my power cables!

Computers, right up to and including the iPhone and similar, are effectively designed with all the live wires hanging out.


So that’s basically my opinion. All of my opinions are very much specifically my own; they don’t represent those of anyone I have ever worked for, work for now, or ever will work for. If they did, I’d probably be a lot more critical, because I’d have to worry more about looking biased. I’d be pointing out more forcefully how Apple has a lot of problems with listening to what users want, and the same goes for Microsoft.

But when you get down to it, none of it is a problem with any one company or piece of technology. Apple is just the latest scapegoat in a debate that has more to do with the fact that society as a whole has a broken trust model than anything about who owns what. Could Error 53 have been handled better? You better believe it could have. But it’s a relatively reasonable solution in an overly complicated world where you effectively can’t trust anyone to know what they’re doing.

Getting rid of old certificates in Xcode

Xcode 4.x’s Organizer window has an annoying habit of not only keeping old certificates (whether expired, revoked, duplicated, or otherwise redundant) around, but also restoring them every time you try to delete them. There’s no interface in Xcode for removing these extraneous identities, and nothing seems to work for getting rid of them. Here’s what I originally tried, more or less in order, starting over from the beginning with another step added each time:

  1. Deleting certs and keys from Keychain Access
  2. Deleting certs and keys with the security command
  3. Restarting Xcode
  4. Restarting computer
  5. Ditching ~/Library/Caches/*{Xcode,Developer}*
  6. Ditching ~/Library/Preferences/*{Xcode,Developer}*
  7. Ditching ~/Library/Developer (while saving only my keybindings and font/color settings)
  8. Removing all archives from the Organizer
  9. Grepping my entire home directory for the certificate name (four hours taken)
  10. Grepping my entire computer for the certificate name (2.5 days taken as I couldn’t figure out a command that excluded sending it down several recursive directories that led back to / – I could’ve, but I was lazy).

I finally ferreted out the final hiding place of Xcode’s ridiculous cache of certificates in /var/folders/<some random alphanumeric characters here>/com.apple.{dt.Xcode,Xcode.501,Developer} (or something very similar). When I deleted that and all of the other things mentioned above, the offending/offensive identities finally vanished.

tl;dr: To be sure you’ve really killed Xcode’s cache, clear out the area Apple deliberately made hard to find and set as $TMPDIR as a so-called security measure.
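
For the record, you don’t have to guess at the random path components to find that area. Something like the little C snippet below (it compiles as Objective-C too) prints the hidden per-user temp and cache directories under /var/folders; confstr() with these keys is standard Darwin API, though exactly which com.apple.* subdirectory Xcode stuffs its identities into is only what I observed on my own machine.

    /* Minimal sketch: print the hidden per-user directories under /var/folders.
       confstr() with these keys is standard Darwin API; which com.apple.*
       subdirectory Xcode actually uses is only my own observation. */
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char tmp[PATH_MAX], cache[PATH_MAX];
        confstr(_CS_DARWIN_USER_TEMP_DIR, tmp, sizeof(tmp));    /* what $TMPDIR points at */
        confstr(_CS_DARWIN_USER_CACHE_DIR, cache, sizeof(cache));
        printf("temp:  %s\ncache: %s\n", tmp, cache);
        return 0;
    }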

Rant: Security Questions Are Stupid

We’ve all heard this bit before, especially the avid readers of Bruce Schneier’s security blog, but after seeing the security questions available on a new account I created today, I just had to do my own rant.

Security questions are considered by some to be a form of “two-factor authentication”. They’re no such thing. If used to further secure login, they’re just an extra password which is almost guaranteed to be much more guessable than your usual password. If used to recover a lost password, they function to replace your password with something almost certainly less secure.

Some suggest giving nonsense answers to security questions for this reason. Of course, then you’re back where you started: you’ll never remember your answers. And that’s assuming you could have remembered the honest answers in the first place, which is often just as impossible. So we’re back to writing them down on paper, which negates the entire point.

Yet these stupid things are required on a majority of secure sites now. Can security auditors please stop trying to please their clients and tell them the truth about how security questions just make things worse?

The questions which prompted this rant:

  • “What was your favorite color in elementary school?” – Now, let’s assume I remember that time of my life in the first place. At which point in elementary school? Let’s say I just pick one, and let’s even more fantastically say I somehow stick to remembering which one. Most children will have said one of the colors of the rainbow. Say it with me now… “Dictionary attack”!
  • “What is the nickname of your youngest sibling?” – Suppose I don’t have any siblings. Suppose I am the youngest sibling. Suppose my youngest sibling doesn’t have a nickname. And even aside from all this, names suffer from relatively low entropy, though admittedly not as low as colors.
  • “What was your first job?” – Have I ever had a job? Am I young enough that I remember exactly which thing I did first? Do I count doing chores as a child? Do I count shoveling snow for my neighbors? Do I count internships? How do you define a “job”?
  • “What breed of dog was your first pet?” – I’ve never had a dog as a pet in my life. And that’s assuming I’ve had a pet at all. If I did, was the first one a dog, and did I get only one dog at the time? By the way, the entropy of dog breeds is even lower than that of colors, even when you count every color there is.
  • “What is the nickname of your oldest sibling?” – See youngest sibling.
  • “What is the name of your first pet?” – Again, suppose I have no pets. Suppose my “first” pet was one of a group. Suppose I picked an arbitrary one out of a group. Also, low entropy again.
  • “Who was your childhood hero?” – What constitutes a hero? Suppose there wasn’t someone I looked up to in childhood? Suppose there was more than one? Suppose I just don’t remember? And the entropy of a hero’s name is likely to be rather lower, on average, than that of a regular name.
  • “What was the model of your first car?” – Where do I even begin here? Did I ever own a car? Am I even old enough to drive? Do I remember its model? Do car models have any kind of entropy at all?
  • “What was the name of your earliest childhood friend?” – I had lots of friends as a child. Didn’t everyone? Suppose, more morosely, that I had none. Am I going to know which one was the earliest? And yet again, the low entropy of names.

Now, I grant, most of these are pretty silly nits. The answers don’t have to be accurate, just something I can reproduce consistently. Unfortunately, the easier an answer is to remember, the less likely it is to be a remotely secure password.

Password strength doesn’t count when the answers are only one word long and chosen from a limited pool, people.
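
To put rough numbers on that: an answer drawn from a pool of N equally likely choices carries log2(N) bits of entropy. The pool sizes in this little C sketch are nothing more than my own guesses, but they show the scale of the gap between a security-question answer and even a modest random password.

    /* Back-of-the-envelope sketch: log2(pool size) = bits of entropy.
       Pool sizes are rough guesses, for illustration only. */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const struct { const char *what; double pool; } answers[] = {
            { "favorite color (a child's palette)",        10 },
            { "common first name / nickname",            1000 },
            { "car model people actually buy",            500 },
            { "random 8-char password (95 printables)", pow(95, 8) },
        };
        for (size_t i = 0; i < sizeof answers / sizeof answers[0]; i++)
            printf("%-42s ~%5.1f bits\n", answers[i].what, log2(answers[i].pool));
        return 0;
    }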

Objective-C and the Web

Earlier today, courtesy of @GlennChiuDev, I was reading Kevin Lawler’s informal tech note about using Objective-C to power the Web. I found myself agreeing with quite a lot of it.

I then had the chance to read @heathborders’ response to the original post, which I realized I also agreed with in considerable measure.

So here’s my response to both. I’ve assumed that readers have at least skimmed both the original post and the response so that I don’t have to do what Heath did and duplicate everything they said here :).

Kevin makes the point that Apple has hugely improved Objective-C in recent times, especially with the most recent releases of OS X and iOS. Heath objects that while Objective-C has certainly improved, it’s still a strict superset of C and comes with all of C’s well-known and discussed-to-death problems.

While I agree with every item on Heath’s list of issues with Objective-C, my thought is that everyone works best in whatever works best for them. Some people (myself included) are going to be more comfortable in a bare-metal-with-extensions language like Objective-C, while others are never going to enjoy it in comparison to Java. It’s a personal thing, and I’d argue that a programmer who doesn’t like Java, for whatever reason, will never save time in it no matter how many conveniences it provides over Objective-C. Heck, because I’m extremely familiar with PHP, I get plenty of scripting done in it even though I agree that Python and even Ruby have enormous language advantages and that PHP has severe community and design issues.

Kevin goes on to say that Java was meant to be a write-once, run-anywhere language but failed at it, and Heath counters by pointing out that Java does indeed do this.

This isn’t really a simple argument in either direction. Java was indeed intended as write-once, run-anywhere, but while Java CLIs and servers do fulfill this promise for the most part, I think Kevin was thinking (as I did at first) of Java GUIs. To a one, I have never met a Java GUI I like, on any platform. Java apps look and act horribly non-native on OS X, are slow (and odd-looking, if less so) on Windows, are just as clunky as everything else on X11 (my personal opinion of the X windowing toolkits is that they all stink), and as for Android… well, I don’t like Droid, and even that aside, Java working “right” on only one platform is the exact opposite of the promise. In that respect it might as well not be any different from Objective-C in its platform dependence.

I do have to agree with Heath and disagree with Kevin regarding writing portable C/C++ being easy. Even if you use POSIX APIs exclusively, which will severely limit your functionality in the general case, it’s a nightmarish undertaking. Even if you restrict yourself to Linux variants, never mind trying to work with all the other UNIXen, OS X, and Windows, it’s all but impossible without a complex system like autoconf (which is another entire rant about horrible garbage in the making).

With regard to the JVM, I have to agree with Heath again: the JVM is absolutely a useful UNIX system layer, and JIT does make it a lot less slow than Java used to be. Similarly with garbage collection: GC is an abomination in C and Objective-C, but that’s because the design of those languages precludes the collector from having full knowledge of what is and isn’t a live object without very restrictive constraints. In a fully virtualized language like Java or C#, properly implemented garbage collection is absolutely a useful technology.
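
Here’s a contrived C illustration of why that is (mine, not from either post): nothing stops C code from disguising a pointer, so a collector scanning memory can never know for certain what is or isn’t a live reference.

    /* Contrived sketch: a pointer hidden from any collector that scans memory
       for pointer-looking values. The object is still reachable, but no visible
       word in memory says so. */
    #include <stdint.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);
        uintptr_t disguised = (uintptr_t)p ^ 0xA5A5A5A5u; /* no undisguised pointer left */
        p = NULL;                                          /* roots now show nothing live */
        int *q = (int *)(disguised ^ 0xA5A5A5A5u);         /* ...yet here the object is again */
        *q = 42;
        free(q);
        return 0;
    }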

I can’t say much about Java re: Oracle, since I don’t know much of what really happened there, but just from reading the respective posts, I have to say Heath makes a more persuasive argument than Kevin’s declarative statements.

Kevin then goes on to say that object-oriented programming is a win over functional programming, and Heath objects, saying that there are a great many people who disagree. In this case, while I personally agree with Kevin in my own work, this is another area where personal preference and training will trump blanket statements every time.

Kevin also talks quite a bit about Automatic Reference Counting (ARC); Heath didn’t respond to this section. I find ARC an absolute divine gift in Objective-C, but all ARC does is bring the syntax of GC to a non-GC environment, and in an incomplete fashion: the developer must still take care to break retain cycles, using weak references and explicitly nil-ing strong references.
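
For instance, here’s the classic cycle ARC will happily leak for you: an object whose block property captures that same object strongly. The Downloader class and its methods are made up purely for illustration.

    // Sketch only, assuming ARC; the Downloader class is hypothetical.
    #import <Foundation/Foundation.h>

    @interface Downloader : NSObject
    @property (nonatomic, copy) void (^completion)(void);
    - (void)start;
    - (void)cleanUp;
    @end

    @implementation Downloader
    - (void)start {
        // Leak under ARC: self retains the block, the block captures self.
        // self.completion = ^{ [self cleanUp]; };

        // The fix is still manual: capture self weakly to break the cycle.
        __weak __typeof__(self) weakSelf = self;
        self.completion = ^{ [weakSelf cleanUp]; };
    }
    - (void)cleanUp { NSLog(@"finished"); }
    @end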

Kevin goes on to talk about Apple’s failed WebObjects project. He gives some reasons and thoughts about Apple moving Objective-C to cross-platform deployment. He seems to be unaware of GNUstep, ObjFW, and other similar projects, but setting that aside, I absolutely agree that Apple bringing the full Objective-C runtime, including most if not all of Foundation, to a wider UNIX base would be spectacular. Reviving and expanding the former OpenDarwin project would also be awesome, in my opinion. In this, I’m completely on Kevin’s side; this should happen, and he lists several good reasons for Apple to do it.

Now Kevin goes on to say what is no doubt the most controversial thing in his entire post: “Xcode is an excellent IDE, with tolerably good git support.”

Like Heath, I must say: This. Is. Patently. False.

Xcode 3 was a tolerably good IDE, absolutely. Not modern or fully-featured by any measure, but fairly decent. Xcode 4, however, is a crock of <censored>. I’ll let Heath’s response speak for me on this for the most part, but I’d like to add that Xcode’s git support is also absolutely abysmal. Worst of all, there’s no way to shut it off, even if you never told Xcode that the project had a git repo.

So to summarize, what Kevin seems to have posted is a rant about his issues with functional languages and Java, and his love for Objective-C, without a lot of facts to back it up. I’m strongly in agreement with his feelings on most points, and I totally agree that Objective-C would be an awesome language for Web programming, but I suspect Apple hasn’t gotten into the field exactly because Java isn’t the terrible beast he made it out to be. This is a shame, to be sure.

As a footnote to those who still follow this blog hoping for a post on this subject: Missions of the Reliant isn’t dead! I’ve been pretty busy for a long time, but I will find time to work on it!

A potential direction for Objective-C

As with my Xcode rant, this is a modified version of a mail I sent to an Apple mailing list, in this case objc-language:

I agree that adding an === operator to Objective-C would be, at the very best, questionable at this late date, if indeed it was ever anything else. My experience in PHP has been that the two are very often confused and misused, and I can’t see that being any different in Objective-C.

I do strongly support the concept of @== and @!= operators that equate to [object isEqual:] and its negation, as well as the associated @<, @> etc. operators. If one of the objects in question can be determined not to implement compare:, throw a compile error. If either operand is typed id, throw a runtime exception, exactly as [(id)[[NSObject alloc] init] compare:] would do now. In short, make the operators mere syntactic sugar, just like dot-syntax and the collection literals, rather than trying to toy with the runtime as @"" does.

This is not “arbitrary” operator overloading, as with C++. That would be an absolutely abhorrent idea, IMO. Define new, unambiguous operators and make it very clear exactly what happens when they’re used. Don’t make it possible to change that behavior from affected code. Add a compiler option or ten so you can do -fobjc-compare-selector='myCompare:' or what have you (as with the one for the class of constant strings), but that’s all.

I understand people who complain that Objective-C is getting “too big”, but the fact that the collection literals were implemented (yay!) makes it clear, as far as I’m concerned, that it’s understood that the language is just too verbose (and difficult to read) as it stands. Adding a new set of clear, intuitive operators would not detract from its usability. People who don’t know enough to write @== instead of == were already going to write == instead of isEqual: anyway, as a rule.
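
To make the sugar concrete, here’s roughly what the proposed operators would boil down to, written with the message sends that exist today; the @-operators themselves are, of course, hypothetical.

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            NSString *a = @"42";
            NSString *b = [NSString stringWithFormat:@"%d", 42];

            BOOL identical = (a == b);                               // plain pointer comparison
            BOOL equal     = [a isEqual:b];                          // what `a @== b` would mean
            BOOL less      = ([a compare:b] == NSOrderedAscending);  // what `a @<  b` would mean

            NSLog(@"identical=%d equal=%d less=%d", identical, equal, less);
        }
        return 0;
    }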

My Xcode rant

It seems everyone who develops for OS X or iOS these days has their own rant about the problems with Apple’s development environment, Xcode. Well, here’s mine, excerpted from a message I sent to the xcode-users mailing list.

Xcode 3 was getting pretty good for awhile, but then Xcode 4 was released, a massive backwards step in functionality which has only been getting worse with its point releases. I have suffered, shockingly, very few of the crashes and data loss bugs which other people have been plagued with, but I have plenty of gripes just the same.

Xcode 4’s integrated layout may look good on paper, and may even work better for some people, but for others it’s a hopeless struggle to manage screen space and get a consistent workflow going. Xcode 3’s ability to pop open and then close the build progress window was delightful; with Xcode 4 I just get the build log in my editor pane without being able to see the code I’m working on. Ditto for the IB integration into Xcode; with a windowed layout that would have been tolerable, but as it is I spend considerable time just going back and forth between interface and code views to see what the heck I’m doing – and no, tearing off Xcode 4’s tabs doesn’t make it better, because that has a near-100% tendency to completely destroy my window position and layout settings.

Xcode 4 took away the class hierarchy view. It took away the ability to compile one file at a time. The integrated debugger console is painful and takes away from code editor screen space. Workspaces are just plain broken and do not work as advertised. The configuration editor is a step up, but unfortunately the “scheme” concept is two steps down. Switching between Debug and Release should not require chugging through three settings panels to find the right switch. And don’t talk to me about the inability to shut off Git integration on a per-project basis (or, for that matter, at all). Why can’t I enable Guard Malloc or change the debugger used for unit tests? Why am I sacrificing valuable screen space (and I say that having a 27″ screen, fully aware that people are doing dev with Xcode 4 on 11″ MacBook Airs) for an iTunes-like status display when Xcode 3’s status bar was just as useful?

And Xcode 4 itself is, as a whole, sluggish in every respect. Operations of all kinds – editing text, creating connections in IB, switching between code files – that were perceptually instantaneous in Xcode 3 take visible time in 4. Even those 200 milliseconds here and there add up to an overall feeling that I’m spending more time waiting for my development environment to catch up with my thinking than I am actually writing code. Yes, it’s very nice that the debugger console and utility panels slide neatly in and out with smooth animation, but I’m a developer; Apple doesn’t have to market eye candy to me. I’d strongly prefer instant response. And all I get for all this trouble is ridiculously bloated memory usage forcing a restart of the program every few hours of serious work.

Before anyone asks, yes, I’ve filed several Radars. All have been closed as duplicates (which means I’ll never hear anything about them again) or ignored (same result). The impression I get from Apple is that they think they have enough people who build their livelihoods on the iOS ecosystem that they don’t have to put any effort into improving the tools for those of us who give a darn about a time when writing code wasn’t an exercise in stockpiling $20,000 for a tricked-out Mac Pro whose specs can compensate for Xcode’s flaws.


The dangers of games

As a programmer, I have the dubious pleasure of enjoying overcomplicated, highly technical games such as EVE Online. For those who don’t know, EVE is an MMORPG that functions on essentially the opposite premise from World of Warcraft. Pretty much nothing is done for you in EVE. There are a million ways to screw up and nothing you can do once that’s happened. It’s rather like real life in that way. Despite its poorly done Python UI and downright pathetic Mac port (it’s the Cider wrapper on top of Wine emulation), I enjoy the game, primarily because it exposes so much of the “nitty-gritty” of how its universe works. A player has control over very detailed numbers and data relating to the functioning of their spaceships and even their bodies. Often it’s too much data; it’s very easy to forget one tiny thing and lose millions of ISK (in-game money) and a great deal of time because of it. Neglecting to bookmark a wormhole exit comes to mind. EVE also does nothing for you: to install implants or activate jump clones, for example, you have to manually pause your skill training queue, even though this is something the game could very easily do for you, and there’s no apparent reason to make the player click the extra four buttons.

In any case, the thought behind this whole bit is, games are addictive. This is not a new discovery, for the world or for me, and I don’t expect anyone to be astonished by the revelation. For people such as me, who fall in love very easily with inane technical details and exacting numbers and gated progression (the need to finish task X before being able to learn the details of the task Y that follows), EVE is particularly so. It’s easy to say “I’ll just do one mission and then get to work,” and in a game like World of Warcraft where quests or even group dungeons are typically short these days (vanilla WoW notwithstanding), that would mean an hour of playing a game and then several hours of productive time. Setting aside the question of the “well just one more” syndrome, which is another problem altogether, the same comment made about EVE usually involves suddenly realizing I’ve spent six hours I meant to use for coding just finishing the one task! It always takes longer to blow up NPC ships than the mission description suggests (even using the Cliff Notes available online). Then there’s travel time between areas of a complex to consider, especially in a slow ship like most of the more powerful ones, and time spent salvaging wrecks (an extremely profitable activity well worth the effort if you have the time to spend, especially on more difficult missions), and then there’s organizing and selling/using whatever you gained from the mission and the salvage.

EVE unfortunately has the problem that for some play styles (including mine), play consists of paying intent attention to the same thing happening over and over for an hour or three, most of that time spent with no user input (and what input there is is also repetitive). Take your attention off it for a moment and you’re liable to find the entire effort wasted. This might be a spectacular fit for some people on the autism spectrum, but I’m not one of them! Oh well. I still like the game, because there’s a very real sense of accomplishment in completing various tasks.

The upshot of it all is that the existence of such games tends to sap the time I’d otherwise spend making progress. Yet, if asked if I’d rather the game be taken away, I have to say no, because I still need the distraction. What I want, really, is more control over the length of the distraction. “Just do it” doesn’t work for everyone, people!

I would recommend EVE Online to compulsives and the technically minded. I would not recommend it for those who don’t have the patience to wait before being able to explore facets of the game. Some of the higher-end stuff takes literally months to gain the skills for.

This post didn’t really have a conclusion, or a solid point. I just kinda felt like getting all that out. :-)

Lua + iPhone = mess

And now one of the rare not-Missions-related posts.

I found myself with the need to run Lua code under iOS. Yes, this is legal according to the current Apple Developer Agreement. Little did I know the journey I’d undertake in the process.

Originally, I got it running by just building the Lua source files into a static library and linking it in, then using the C API as usual. Worked like a ruddy charm. But then I decided to get clever. “Wouldn’t it be great,” I thought, “if I could run the bytecode compiler on the source and make the scripts into unreadable bytecode objects?” So first I tried running the source files through luac and using them otherwise as usual. The story of how I got Xcode to automate this process without a damned Run Script build phase is another one entirely.

Bad header in precompiled chunk.

The docs say, “The binary files created by luac are portable only among architectures with the same word size and byte order.” Fine. x86_64 and armv[67] are both little-endian, but sizeof(long) on x86_64 is twice what it is on armv7. Duh! Running Stuff on Apple Systems 101. Universal binary, here I come! I built Lua 32-bit, lipo’d myself a nice universal binary of luac, and ran it via /usr/bin/arch -arch i386.

Bad header in precompiled chunk.

What? Byte order and word size are correct. So I delved into the Lua source code: lundump.c, lines 214–226. Precompiled chunks are checked for version, format, endianness, size of int, size of size_t, size of Lua opcodes, size of the lua_Number type, and integralness of lua_Number. All of which should have been correct – until I remembered that I’d changed the luaconf.h of the iOS binary’s Lua to make lua_Number a float. It’s a double by default.

I rebuilt my universal luac using the same luaconf.h the iOS project uses and ran everything through yet again. Lo and behold, it worked that time. It still doesn’t really feel right running bytecode produced by an i386-hosted luac on armv7, but since Lua doesn’t even remotely support cross-compilation and I don’t feel like jumping through the hoops of building a luac utility on the device without jailbreaking, it’s the best I can do.
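
For context, the loading path that kept failing is nothing more exotic than the sketch below (Lua 5.1 C API; the function and file names are mine). luaL_loadfile happily accepts a precompiled chunk, and “bad header in precompiled chunk” is the error it hands back when the luac that produced the file was built against a mismatched luaconf.h.

    /* Sketch of the embedding side, assuming Lua 5.1. */
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    #include <stdio.h>

    int run_chunk(const char *path) {          /* e.g. a .luac file from the app bundle */
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);
        /* luaL_loadfile accepts source or precompiled bytecode; a luac built
           against a mismatched luaconf.h fails right here with "bad header". */
        if (luaL_loadfile(L, path) != 0 || lua_pcall(L, 0, 0, 0) != 0) {
            fprintf(stderr, "Lua error: %s\n", lua_tostring(L, -1));
            lua_close(L);
            return -1;
        }
        lua_close(L);
        return 0;
    }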

I would be remiss if I didn’t also remark upon the journey of learning how to make Xcode automate this process for me. There was more than just running luac to be done. I wanted my Lua scripts run through the C preprocessor, to get the benefits of #include and #define. Easier said than done. The C preprocessor doesn’t recognize Lua comments, and while I could have used C comments instead (comment stripping is part of preprocessing, after all), that would have meant changing a goodly bit of code and would have messed with the syntax coloring – and, more importantly, with my sense of code aesthetics. It’s always bothered me that the C preprocessor can be both Turing-complete and damn near impossible to work with (an aptly named Turing tarpit). So I wrote a pre-preprocessor (also in Lua, naturally) to strip the Lua comments out first. But then I had to parse and handle #include manually. Oh well. The real benefit of C preprocessing is the macros anyway. Making Lua talk to clang was quite an interesting bit of work; Lua’s built-in library for this sort of thing is a little bit lacking. Anyway, the upshot was that there were three steps to processing Lua files in the iOS project: preprocess, luac, copy to Resources.

I finally caved in to my sense of danger and went looking for the extremely scanty reverse-engineered documentation on Xcode plugins. It was pretty awful. The API looks to be quite a mess of inconsistently named attributes with strange side effects. It took me two hours to hunt down the cause of an “empty string” exception as being the use of $(OutputPath) inside a cosmetic attribute (Rule name). It was hardly perfect even when I declared it done, and then I realized I had the architecture problem again: I had to run a different luac for x86_64 than for everything else. If i386 were the only other architecture, I could’ve just let a universal binary handle it, but no, it had to cover armv[67] too. Ultimately it turned out a second plugin was necessary, lest Xcode be sent into a downward spiral of infinite recursion at launch. Ugh. Don’t talk to me about all the horrifying effects the tiniest typo could have on Xcode. I love the one in particular where the application was fully functional except for builds and the Quit command. Uncaught Objective-C exceptions equal inconsistent program state, people. And it’s not just that the API is entirely undocumented. You get those sorts of weird behaviors even if you never install a single plugin or specification file; the Apple developers don’t always get it right either. The application is a horrid mess, and I find myself desperately hoping that Xcode 4 is a full rewrite. I can’t discuss anything about Xcode 4 due to NDA, of course.

As a side note to all this, compiling extension modules for Lua also turned out to be an unmitigated nuisance. It turns out, you see, that the extension authors out there tend not to run their code on Darwin-based systems, and so all the ludicrous quirks of dyld tend to hit a user of their code smack in the face. Finding out that I needed to pass -bundle -undefined dynamic_lookup to the linker instead of -shared was easy enough from the Lua mailing list posts on the subject. Figuring out why that didn’t work either meant realizing I’d built Lua itself with -fvisibility=hidden for no good reason at all, causing the dlopen() calls to crash and burn. Figuring out why my self-written pcre interface module randomly crashed with a bad free() during Lua garbage collection meant debugging and valgrinding like nuts until I found out you’re not supposed to link liblua.a into the extension module. Static library + linked into both loader and module = two copies of the code, and it’s anyone’s guess which you get at any given time. One could only wish dyld were able to make itself aware of this ugly situation.
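
For the curious, the module side of that story is just an ordinary Lua 5.1 C extension along the lines of the sketch below (module and function names made up). On Darwin it gets linked with -bundle -undefined dynamic_lookup and, per the hard-won lesson above, without liblua.a; the Lua symbols resolve against the host process when the module is loaded.

    /* Minimal Lua 5.1 extension skeleton; module and function names are illustrative. */
    #include <lua.h>
    #include <lauxlib.h>

    static int l_greet(lua_State *L) {
        lua_pushstring(L, "hello from C");
        return 1;                               /* one value returned to Lua */
    }

    static const luaL_Reg funcs[] = {
        { "greet", l_greet },
        { NULL,    NULL    }
    };

    /* require("example") looks for luaopen_example in example.so */
    int luaopen_example(lua_State *L) {
        luaL_register(L, "example", funcs);     /* Lua 5.1 registration API */
        return 1;
    }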

If anyone’s interested, give a holler in the comments and I’ll post the pcre extension as an open-source project. It’s not fully featured (in particular it’s missing partial and DFA matching, as well as access to a few of the more esoteric options of the API), but it does work quite nicely otherwise. It’s built against PCRE 8.10 and Lua 5.1, and is upwardly compatible with the still-in-progress Lua 5.2 as it currently stands.

I promise I’ll get some work on Missions done this week!

Yet another blog

Well, why not. It took 40 minutes to shut off all the useless crap WordPress installs and turns on by default, and then another 15 minutes to kill the extra stuff DreamHost decided to install on top of that. This is what I call the overproliferation of Web technology; this application is an absolutely perfect example of the evolution of the Web into the single application used for everything you do on a computer. A stateless ancient protocol like HTTP, a twisted screwy markup language like HTML (yes, including XHTML and HTML5), a dangerous and badly misused active content language like JavaScript, absolutely ZERO standardization on audio and video formats (HTML5 lost its focus on Ogg when Apple turned up their noses at it), and even more “operating systems” (Safari, Firefox, IE, all the variants on them, and all the niche browsers) than a real computer (which is at least limited to Windows, Apple, and the *NIX variants). Sure, turn the world into one big network, I have no objection to that, but do it with modern technology instead of clinging to the ARPAnet!