Monthly Archives: May 2010

Missions of the Reliant: All hands, abandon ship, I repeat, all hands-

The laser cannon now works, and can destroy fighters.

The sound playback is still just slightly off, but otherwise all is good. The display issues with the laser and target scanner are fixed, the damage coefficients are corrected (the only reason the laser works too well right now is the fighters haven’t had their shields set to the correct type yet).

My current short-to-medium-term list of things to work on:

  1. Fix the sound issue with the laser.
  2. Implement the shield type selection for fighters.
  3. Implement torpedo holds, torpedo launchers, and torpedoes.
  4. Implement the various scanner systems (lrs, sector, viewscreen) for the ship (this means they’ll be able to take damage and go offline as intended).
  5. Implement life support. This also involves being able to take damage and go offline, making crew members start dying.
  6. Implement the computer. This represents the final missing system on the player’s ship and will also give me an excuse to implement the keyboard command interface.
  7. Implement the crew members. With the keyboard command interface in place, this is just a matter of tying them into existing code.

After that comes messaging capabilities (“Captain, the laser cannon is offline!”, “Captain, we’ve received a message from Alliance HQ.”), more weapons (missiles, tractor beam, the battleship planet-buster – I plan to call it an Illudium Q-36 Explosive Space Modulator, and a secret easter egg I’m not gonna tell you about), the rest of the enemy types (battleship, cruiser, rebel cruiser, laser satellite), the interface for hailing and interacting with a base (repairs, medical facilities, mat-trans, crew transfer), and the various items that add bonuses to the player’s ship (hyperspace jump drive, super laser couplings, laser and shield types, etc.). Then comes the overall mission logic, and that’s effectively the whole game. It looks like a lot, but about 70-80% of the framework is now in place.

Oh, and when I do the cloaking device, instead of just an expanding wave, it’ll be an alpha-blended 75%-opaque expanding wave over a fading-out ship. OpenGL makes this kinda coolness so very easy.

Stay tuned for more news about my progress.

Missions of the Reliant: Too late. Hang on!

The Reliant’s laser cannon is now functional. It fires from the wrong spot on the ship, hits the wrong spot on the enemy ships, has the wrong idea about when the enemy ships are in and out of range, plays its sound incorrectly, and doesn’t look quite like the original game’s laser, but it does work, and all but the last of those are trivial fixes.

As for that last, well, there’s this problem of Mike having taken advantage of old technology.

See, in the original game, the line that forms the laser would be drawn in one of two colors, then erased, and it was up to QuickDraw how quickly those pixels were seen by the user. The result in practical use was a semi-random flickering of the laser beam in and out, and a significant (while purely illusory) blending of the two colors. However, I use OpenGL to draw the lines and have no provision for erasing them, so the result is a far more solid line where both colors of the laser are easily visible. I’ll have to experiment a bit with OpenGL modes to fix it.

But the laser does work!

Missions of the Reliant: Engine room, flight recorder visual, fifty-one point nine one zero

A QTKit-based video recorder is now integrated into the code. I tried about twenty ways to get it to record audio too, but between CoreAudio’s failings and QTKit’s limitations, nothing both sounded correct and remained correctly synchronized.

  1. Capture the sound output of the game and add it as a sound track to the video. Failure reason: CoreAudio provides insufficient API to do this when using OpenAL.
  2. Pipe the sound output through SoundFlower and add it as a sound track to the video. Because OpenAL is broken on OS X, this necessitated changing the *system* default audio output device to use SoundFlower. Failure reason: Because video was recorded one frame at a time, with the accompanying delays necessary to compress each frame, while the audio was recorded in realtime, synchronization was impossible.
  3. Pipe the output through SoundFlower and manipulate the audio data to solve the synchronization issues. Failure reason: QTKit, unlike the original QuickTime API, provides no API whatsoever for manipulating raw audio data in a movie.
  4. Add the sounds used by the game as tracks to the video. Failure reason: QTKit’s API again proved unequal to the task, even in the Snow Leopard version; an attempt using SPI was quickly abandoned.
  5. Record each sound event, construct a sound track from those events, and add that track to the video. Failure reason: QTKit’s API once again.
  6. Forgo QTKit entirely and use FFmpeg to do the media authoring. Failure reason: The documented -itsoffset flag is not implemented by the FFmpeg commandline driver, nor correctly supported by the supporting libraries.
  7. Manually manipulate every input sound file to have the necessary time of silence at the beginning, then pipe through FFmpeg or QTKit. Failure reason: The entire effort was becoming ridiculous, and I felt my time would be better spent working on the actual game and worrying about something like that much later, especially since there was no need for it at all.

In every case, QTKit either had no API to accomplish the task or its APIs didn’t work correctly, and FFmpeg fared no better. I wasn’t able to drop back to the old QuickTime API because it isn’t supported in 64-bit code, and I intended this game to be forward-compatible.

There was one interesting side note to all this. In the process of recording video frames, I naturally ran into the issue that OpenGL and QuickTime have flipped coordinate systems relative to each other. Rather than play around with matrices, I wrote a quick in-place pixel flipping routine:

- (void)addFrameFromOpenGLAreaOrig:(NSRect)rect
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSUInteger w = rect.size.width, h = rect.size.height,
               rowBytes = w * sizeof(uint32_t), i = 0, j = 0, rowQs = rowBytes >> 3;
    void *bytes = [[NSMutableData dataWithLength:h * rowBytes] mutableBytes];
    uint64_t *p = (uint64_t *)bytes, *r = NULL, *s = NULL;
    NSImage *image = [[[NSImage alloc] init] autorelease];

    glReadPixels(rect.origin.x, rect.origin.y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, bytes);
    for (i = 0; i < h >> 1; ++i)
        for (j = 0, r = p + (i * rowQs), s = p + ((h - 1 - i) * rowQs); j < rowQs; ++j, ++r, ++s)
            *r ^= *s, *s ^= *r, *r ^= *s;
    [image addRepresentation:[[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:(unsigned char **)&bytes
                      pixelsWide:w pixelsHigh:h bitsPerSample:8 samplesPerPixel:4
                        hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
                    bitmapFormat:0 bytesPerRow:rowBytes bitsPerPixel:32] autorelease]];
    [self addFrame:image];
    [pool drain];
}

No doubt the more skilled among you can see the ridiculous inefficiency of that approach. Through staring at the code a great deal, I was able to reduce it to:

- (void)addFrameFromOpenGLArea:(NSRect)rect
{
    // All of this code assumes at least 16-byte aligned width and height.
    // Start r at the top row and s at the bottom row; for each pair of rows,
    // swap rowBytes bytes (in 8-byte chunks) of r and s, incrementing as we go.
    // rb = w * 4 bytes per row, rq = rb / 8 chunks per row, so two rows hold
    // ((w * 4) / 8) * 2 = (w / 2) * 2 = w chunks -- hence the loop bound below.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    NSUInteger w = rect.size.width, h = rect.size.height, i;
    uint64_t *p = malloc(h * w << 2), *r = p, *s = p + ((h - 1) * (w >> 1));
    NSImage *image = [[[NSImage alloc] init] autorelease];

    glReadPixels(rect.origin.x, rect.origin.y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, p);
    for (; s > r; s -= w)
        for (i = 0; i < w; i += 2)
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
    [image addRepresentation:[[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:(unsigned char **)&p
                      pixelsWide:w pixelsHigh:h bitsPerSample:8 samplesPerPixel:4
                        hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
                    bitmapFormat:0 bytesPerRow:w << 2 bitsPerPixel:32] autorelease]];
    [self addFrame:image];
    free(p);
    [pool drain];
}

Much better, but still pretty inefficient when the size of every single frame is the same. Why keep redoing all those width/height calculations and buffer allocation and defeat loop unrolling? So I wrote a specialized version for 640×480 frames, with all the numbers precalculated.

- (void)addFrameFromOpenGL640480
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    if (frameBuffer == NULL)
        frameBuffer = malloc(1228800); // 640 * 480 * 4 bytes

    glReadPixels(0, 0, 640, 480, GL_RGBA, GL_UNSIGNED_BYTE, frameBuffer);
    register uint64_t i, *r = frameBuffer, *s = r + 153280; // 479 * 320, the bottom row
    for (; s > r; s -= 640)
        for (i = 0; i < 40; ++i) { // 320 chunks per row, unrolled 8x
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
            *r ^= *s, *s ^= *r, *r++ ^= *s++;
        }
    NSImage *image = [[[NSImage alloc] init] autorelease];
    [image addRepresentation:[[[NSBitmapImageRep alloc]
        initWithBitmapDataPlanes:(unsigned char **)&frameBuffer
                      pixelsWide:640 pixelsHigh:480 bitsPerSample:8 samplesPerPixel:4
                        hasAlpha:YES isPlanar:NO colorSpaceName:NSDeviceRGBColorSpace
                    bitmapFormat:0 bytesPerRow:2560 bitsPerPixel:32] autorelease]];
    [self addFrame:image];
    [pool drain];
}

I took a look at the code the compiler produces at -O2, and I’m fairly sure that its assembly will run parallelized, though not actually vectorized.

Yes, I’m fully aware that glReadPixels() is slower than creating a texture. I was testing my optimization skill on the low-level C stuff, not the entire routine. I only regret I didn’t have the patience to try doing it in raw SSE3 assembly, because I recognize an algorithm like this as being ideally suited to vector operations.

Missions of the Reliant: Their coil emissions are normal.

More status!

  1. The radar is implemented and functioning.
  2. A whole list of off-by-one pixel errors are fixed.
  3. A subtle retain cycle KVO crash is fixed.
  4. Most of the target scanner bugs are fixed.

I say “most” in that last because I’m not sure if the final bug can be fixed. The cocoa-dev mailing list seems dubious (click the link for a description of the problem). If there isn’t a method, I’ll lose a bit of look-and-feel in the target scanner, hardly showstopping but definitely annoying.

Screenshots of the working radar coming soon!

Missions of the Reliant: They’re locking phasers.

“Lock phasers on target.” – Khan
“Locking phasers on target.” – Joachim
“They’re locking phasers.” – Spock
“Raise shields!” – Kirk
“FIRE!” – Khan

The Reliant now has targeting and scanning systems implemented. There’s still several bugs to work out, but the basic system is in place. When that one little fighter I put in as a test shows up, the ship can lock onto it. Of course there’s no indication on the radar (since there isn’t a radar yet) and no way to destroy it (since there isn’t a laser cannon – though there are laser couplings – or torpedo holds or torpedo launchers yet), but at least we can scan it! Or we could if fighters weren’t always unscannable. Oh well.

Still, that little flashing box on top of the fighter is darn aggressive.

The reason I don’t have more to show than a buggy targeting system is that I spent most of the time implementing it also working out a huge mess of memory management bugs I’d been ignoring since day one. Leaks, retain cycles, overreleases, you name it. What kills me is that the Leaks tool missed all but a very few of them. I ended up manually debugging retain counts with calls to backtrace_symbols_fd(). As uuuuuuuuuuuuuuuuuugly as lions (Whoopi Goldberg, eat your heart out). In the end, a few tweaks to the way things were done were in order: too much work was being done in -dealloc when I had a perfectly good -teardown method handy that functions much like the -invalidate suggested by the GC manual.

Why aren’t I using GC and saving myself this kinda trouble? Frankly, given my current understanding of things, I think GC would be even more trouble than this! This, at least, I understand quite thoroughly, and I have considerable experience dealing with the issues that arise. I know how to manage weak references properly to avoid retain cycles and how to do a proper finalize-vs-release model. I haven’t even gotten tripped up by hidden retains in blocks more than once! Yes, I screwed it up badly here, but that’s because I was paying very little attention. I do know how to do it right if I try, and now I’m trying.

Garbage collection, on the other hand, is a largely unknown beast to me, and from what I’ve read on Apple’s mailing lists, the docs Apple provides are very little help to developers new to the tech. The hidden gotchas are nasty devils, much worse than hidden retains in blocks. Interior pointers and missed root objects come to mind, especially since I’m targeting 10.5 where GC support was still new and several bugs in it were known to exist (and may still). Apple chose to provide an automatic stack-and-heap scanning collector, whereas I would only have been comfortable with a manual heap-scanning collector, which is really little more than autorelease anyway. In such a light, the model I’m familiar with and clearly understand seemed a much better choice than trying to learn an entirely new paradigm for this project. Ironically, I still chafe at manual memory management in C++ projects, especially the lack of autorelease, and as with GC, I don’t understand such things as auto_ptr and shared_ptr well enough to get any use of them. Templates make me cringe.

With the targeting scanner implemented, all I need to do is debug it. The next step will be to write the radar, so as to double-check that the fighter AI is working as it should and that the target scanner is de-targeting properly when something falls out of range. After that I need to test the scanner versus multiple targets, especially the new smart-targeting mode I’ve added as an easter egg. What can I say, it always drove me nuts that it targeted “the next enemy in the internal array of enemies” rather than “the nearest enemy to my ship”. But finding how to enable it is left as an exercise to you nostalgic people like me who’ll actually play this port :-).

Come on, iTunes. Jesse Hold On – B*Witched? *punches the shuffle button* 太陽と月 – 合田彩. Much better! Sorry, interlude *sweat*.

Anyway, once the scanner can handle multiple targets, it’s time to implement the third and final component of the laser: the cannon. Time to blow things up with multicolored hypotenuses of triangles! I might even study up on a little QTKit so I can take movies from the OpenGL context to show off. Bandwidth, though; this blog isn’t exactly hosted off DreamHost. (Linode actually, and they’re really awesome). Oh well, we’ll see. Maybe I’ll even leave the feature in as another easter egg…

To summarize, the current plan is:

  1. Fix bugs in target scanner.
  2. Implement radar.
  3. Spawn multiple targets for the target scanner.
  4. Implement laser cannon.
  5. Maybe implement movie capture of gameplay.
  6. ???
  7. Profit!

I’m not making it up as I go, I swear!

Missions of the Reliant: Watch it, you’ve got one on your tail!

Missions of the Reliant version 3.0 now has the framework for enemies, enemy AI, and those infinitely annoying little fighters that everything launches in droves at you and you can only hit by draining all the charge from your laser couplings. It took some work, let me tell you.

Mike’s original code expresses differences between facing angles as a function of which sprite is being displayed. Efficient. My code expresses differences between facing angles as atan2(-distanceY, distanceX). Mathematically correct and conceptually accurate.

‘Course, then “am I facing the player within a 120-degree arc?” becomes a completely different numerical test. Didn’t help that you kept setting variables you never used anywhere, Mike :-). Originally, I tried to use pretty much identical code for enemy movement as player movement, but it became clear that there were just too many quirks to it.

The fighters especially have an interesting AI about moving:

  1. Calculate distance from me to the player. If too high, return to origin, else continue. (Target determination.)
  2. Calculate angle from me to the player. If I’m not facing that angle exactly (difference between facing and calculated != 0), turn towards it. (Aim.)
  3. If I’m facing the player within 120° of arc, apply thrust N. (Thrust.)
  4. If I’m within distance D1 of the player, reduce speed by 10%. (Falloff.)
  5. If I’m within distance D2 of the player, and I have a loaded torpedo, and I’m facing the player exactly as in step 2, fire my weapon. (Attack.)
  6. Repeat every tick (or every other tick depending on preference setting) of the game timer.

Sounds simple? The fighter has to track its target, its current vector, its ideal vector, the difference between those two, its distance from target, and its firing delay, and constantly adjust all parameters accordingly. The code specifically intended for fighters is 118 lines including comments. Add 134 lines for code that applies to every enemy, 176 lines for code that applies to all game objects, 50 lines for code that applies to anything physics-capable, and 87 more lines for code that applies to everything that needs to do things based on the game timer. Subtract roughly 50 from that for comments and you get 500 lines of code just to drive that simple little AI, not counting the game timer itself or any of the drawing logic.

And I haven’t implemented the weapon, collision detection for the weapon, or the return to origin logic yet. And it’s got a couple of glitches as is, like if the fighter happens to spawn at exactly a 90, 180, 270, or 0-degree angle to the player while said player is at a full stop, it moves in a straight line rather than arcing like it should. Not sure if that bug’s in the original game since it’s next to impossible to set that particular scenario up deliberately without hacking source.

Did I mention that due to several stupid errors on my part, I ended up having to go back to the original fighter model in Infini-D and re-render it, then use Photoshop Elements to recreate the 36-phase sprite from that render? That was a fun two hours. On the other hand, the fighter model looks more “cool” now. It’s also a little harder to see. Sigh.

If anyone’s interested, here’s a couple of screenshots. I’d post a movie but I don’t have any instantly available way of making one and I’m too lazy to pursue the less instant ways.

Missions of the Reliant Fighter 1 Missions of the Reliant Fighter 2

The utility of a scripting language.

I feel like quite a geek. I had some text copied from my IRC client that I wanted to transform to XML for my XSLT sheet to display all nicely on the Web interface. Format of a line copied from the client:

altered nickname<tab><tab>message<tab>hh:mm:ss<space><AM or PM><carriage return>

Correctly formatted XML for the XSLT sheet:

<message><time>unix timestamp</time><type>2</type><sender>correct nickname</sender><content>message</content></message>

How to transform this? I could’ve done the majority of the work with a PCRE regexp and search/replace, but that wouldn’t have fixed the nicknames (since you can’t make if/else decisions in a replace in most editors) or calculated the correct UNIX timestamps. So I turned to scripting, of course. Some would have chosen to use Ruby, others Python, or Perl, or possibly even bash for some masochistic reason. I chose PHP.

Took five minutes, most of which was spent constructing the regexp. The code:


<?php
$conversation = file_get_contents(__FILE__, false, NULL, __COMPILER_HALT_OFFSET__);
$valid_nicks = "nick1|nick2|nick3|nick4|nick5";
preg_match_all('/^('.$valid_nicks.')(?:\t+)([^\n\t]+)(?:\t+)(\d+):(\d+):(\d+)[ ]([AP]M)$/mSu',
               $conversation, $matches, PREG_SET_ORDER);
$xml = "";
$time = time();
foreach ($matches as $splitline) {
    $nick = $splitline[1];
    $message = $splitline[2];
    $hour = $splitline[3];
    $minute = $splitline[4];
    $second = $splitline[5];
    $meridian = $splitline[6];
    if ($nick === 'nick1' || $nick === 'nick2') {
        $nick = 'real_nick1and2';
    } else if ($nick === 'nick3' || $nick === 'nick4') {
        $nick = 'real_nick3and4';
    }
    ++$time; //mktime($hour + ($meridian === 'PM' ? 12 : 0), $minute, $second, date('n'), $meridian === 'PM' ? 1 : 2, date('Y')));
    $xml .= "<message><time>{$time}</time><type>2</type><sender>{$nick}</sender><content>{$message}</content></message>\n";
}
echo $xml;
__halt_compiler();

// the conversation was pasted here

I daresay that was a pretty cheaply elegant bit of work, if I may be allowed to pat myself on the back. Entirely trivial stuff, but it shows how useful scripting can be for some tasks. How inane would that conversion have been, replacing the nicks by hand and calculating the timestamps one at a time? The conversation was about 500 lines long. Yay scripting.

Please, don’t comment with a one line Perl script to do the same thing from STDIN, I’m well aware you can use Perl to compress any complexity down to what looks like a couple hundred bps of line noise :-D.

P.S.: I am fully aware that the code has several inefficiencies, odd-seeming decisions, things that could’ve been done better, and so on, and so on. Who cares? It works. It’s not meant to win design awards.