I’ve been running Home Assistant for a while now, and I had vaguely heard of the ESPHome project, so I looked into it more. Turns out, it’s amazing! In a nutshell, it allows you to create your own custom “smart home” devices from the popular Espressif microcontrollers, with lots of hardware support for different boards and a variety of peripherals. You write a YAML config file that outlines what peripherals are attached and how (what pin, what bus, etc.) and ESPHome will generate, compile, and install a custom firmware blob for your device and integrate it with your Home Assistant installation. Super, super cool.
First things first, how does the thermostat work in one of these glass-door merchandisers? On my particular unit, which is a True GDM-12RF, the thermostat sits up in the top-right corner of the unit, with a long temperature probe coming out of it that snakes back into the evaporator coils. A 120-volt AC wire goes through it, and the thermostat acts as a switch for this wire: if the temperature gets warm, it closes the switch and power flows; when it gets cold, it opens the switch. Down in the base of the machine, that wire is used to power a normal 3-prong outlet which the compressor plugs into. So to create the replacement thermostat, I need to use a relay that will let me use a low-voltage signal to control the high-voltage power line.
Next I needed hardware. The parts I ordered from SparkFun to assemble my thermostat boiled down to an ESP32 development board, a TMP102 temperature sensor breakout, a relay that can switch mains power, and a Qwiic cable to connect the sensor.
I used some spare bits of 12-gauge copper wire I had lying around to make high-voltage connectors for the relay, and soldered some thinner wires to the ESP32 to connect to the low-voltage side of the relay. I put the signal wire on pin 13, chosen because it also happens to be connected to an on-board LED, giving me a secondary way to tell whether the board is trying to power the relay.
Then I needed to flash ESPHome firmware onto the device. I tried doing this with a Python-based flasher tool, but couldn't get it to work correctly, likely because I was using the wrong board ID. I ultimately succeeded by plugging the board into my PC and using the ESPHome Web flasher, which uses WebUSB to program the device from JavaScript. I have to say, even though I generally think WebUSB is a silly idea, it actually worked and was convenient in this case, so I have to admit maybe it's not useless.
Once the board was flashed with a "blank" firmware, I could adopt it into Home Assistant and configure it with the details about how it's wired together and how I want it to behave. Below is a portion of my YAML config.
```yaml
esphome:
  name: soda-machine
  friendly_name: Soda Machine

esp32:
  board: esp32-s2-saola-1
  framework:
    type: arduino

i2c:
  sda: 1
  scl: 2

sensor:
  - platform: tmp102
    id: soda_temp

switch:
  - platform: gpio
    id: soda_cooler
    pin: 13
    internal: true

climate:
  - platform: thermostat
    name: "Thermostat"
    sensor: soda_temp
    cool_deadband: 2°C
    min_idle_time: 60s
    min_cooling_off_time: 60s
    min_cooling_run_time: 60s
    cool_action:
      - switch.turn_on: soda_cooler
    idle_action:
      - switch.turn_off: soda_cooler
    visual:
      min_temperature: -25°C
```
For my first attempt, I put all the new components in the fridge near the top, where the original thermostat sat. This seemed to work fine, but after a while the compressor started to perform poorly: it got louder than normal, and sometimes the fan would fail to spin even when power was applied.
(In the photo below, the pink wires are the original thermostat control wire: they lead up into the compartment where the original device was installed)
My suspicion was that something was going wrong with the relay. On a hunch, I held it in my hand for a minute to warm it up, and it started performing better: I guess it doesn’t tolerate the cold very well, despite the datasheet saying it’s rated for as low as -20°C. Shifting things around a bit, I was able to move the relay outside the cold compartment, and things ran much better. I just shut the door directly on the wires: the weatherstripping around the door is flexible enough to make a passable seal over them.
I’ve since moved the whole contraption down to the bottom of the fridge: I don’t think there’s any reason to keep the thermostat up top, especially since the device we’re trying to control is at the bottom. Right now the ESP32 is still inside the cold compartment, but only because the Qwiic cable I have isn’t long enough to keep the temperature sensor inside by itself. That should be easy enough to fix if I solder together some longer wires.
The Home Assistant integration works great: it shows up just like the Nest that controls the air in the house, and it lets me remotely monitor and set the target temperature.
The biggest to-do item is to find a way to power the ESP32 from the same wiring box as the rest of the fridge. Right now I have a separate extension cord just for the USB power adapter, but the wiring box seems to have a knockout where I could install another 3-prong outlet, which would really tidy up the installation. Once I do that, I can put the cover back on the bottom, and it’ll be completely done and can keep our drinks cold for years to come!
Working with ESPHome has been great: I feel like I have a new superpower, and now I’m looking around the house for more opportunities to build custom smart devices. Kudos to the project developers for creating such a great framework!
Here’s what I’ve been reading and watching before and during the social isolation age.
Annihilation. Finally got around to watching this after owning it on iTunes forever. A decent movie, but somehow didn’t fully capture the profound weirdness of the book.
Below Deck Sailing Yacht. A total guilty pleasure that I make no apology for.
Elantris by Brandon Sanderson. I think this was Sanderson’s first Cosmere novel, and it lays out the pattern that other books would build on. Great read.
Upgrade. A fun, but dark, indie sci-fi.
Detective Pikachu. More enjoyable than I honestly expected. Ryan Reynolds as Pikachu shouldn’t work as well as it does.
Cloverfield. I don’t think I’d re-watched this since it originally came out, and I had totally forgotten that T.J. Miller is the “cameraman” for most of the movie.
Psych. A “comfort food” TV show, perfect for rewatching during a global crisis.
Jumanji: Welcome to the Jungle. I saw this in the theater when it came out, and my expectations were low, but it was fun! Rewatching it reminded me that I should watch the sequel.
Back to the Future. We recently set up a surround-sound system at home using some Sonos speakers, so I put this on as a “test,” which was a smashing success.
The Lies of Locke Lamora by Scott Lynch. An exciting heist story in a truly strange sci-fi / fantasy setting. Looking forward to reading the next in the series.
Batman Begins. Another movie to “test” the surround-sound; I didn’t think I’d be so into the home theater setup as I am, but now I don’t think I can go back to my old stereo life.
Here’s what I’ve been reading and watching this January.
His Dark Materials on HBO. I’d been curious about this story since the 2007 movie, but never took the time to read the books. This was really well done and inspired me to finally read the series.
Watchmen on HBO. This was so, so good. Every episode made me say two things: “WTF did I just watch?” and “I cannot wait until the next episode.” We truly live in the golden age of TV.
The Golden Compass, The Subtle Knife, and The Amber Spyglass by Philip Pullman. After watching the HBO show, I couldn’t wait for the next season so I went ahead and read all three books. These were a great read, but now I have no idea how they’re going to film these next two books without spending hundreds of millions of dollars.
For All Mankind. I’m a sucker for both space exploration and alternate histories, so this was a no-brainer for me. Fun, although it gets slow in a few places. Looking forward to season 2.
The Witcher on Netflix. I never played the games—and didn’t realize the games were based on books—but enjoyed this more than I expected. The “Toss A Coin To Your Witcher” song is just as catchy as everyone says.
Ra by Sam Hughes. The premise is fascinating (“What if magic was discovered in the 1970s and was treated as a rigorous science?”) but gets into “hard” sci-fi territory, which I find kind of tedious. I finished it but would find it hard to recommend unless you like that particular style.
The Last Wish: Introducing the Witcher by Andrzej Sapkowski. Again, watched the show, didn’t want to wait for a new season, so now I’m working through the books. I haven’t finished this yet (only about halfway through) but it’s been just as enjoyable as the show and I’ll probably keep reading in the series.
For example, say you’re poking around a new machine and find a device with vendor ID 0x8086 but don’t know who that is. Query for TXT records on <vendor>.pci.id.ucw.cz like so:
```
$ dig 8086.pci.id.ucw.cz TXT +short
"i=Intel Corporation"
```
You can also get the name of a specific device by adding the device ID as another sub-domain, like <device>.<vendor>.pci.id.ucw.cz. For example, if you had a device with vendor ID 0x8086 and device ID 0x101a:
```
$ dig 101a.8086.pci.id.ucw.cz TXT +short
"i=82547EI Gigabit Ethernet Controller (Mobile)"
```
There are several more kinds of queries you can make; the source to PCI Utilities, specifically the function pci_id_net_lookup(), reveals how to build the appropriate ‘domain name’ to query.
I also took this chance to make the design a bit more complex while staying responsive, which would have been way harder without CSS flexbox; it made this surprisingly easy.
I also borrowed some really handy design tips from this great article, 7 Practical Tips for Cheating at Design. I think my last design definitely committed a few of the sins they recommend against in the article.
Anyway, watch out for wet paint or layout bugs while the new design settles in.
Adding this to Jekyll wasn’t too bad (you can see the Git commit for yourself), although I did have to learn some new Liquid template tricks, like using the jsonify filter, the unless tag, and the forloop object.
I don’t know if JSON Feed is going to take off in a big way or not, but it was fun to implement. Check out the spec if you’re interested in adding it to your project.
Laura and I spent our fifth wedding anniversary in Maui, Hawaii. It was amazing. This was our second-ever trip to Hawaii, and I can’t imagine it’ll be our last: we had a lot of fun the first time, too.
I took my drone with me, naturally, along with my GoPro, and I think I got some pretty good videos out of them. Mix in some footage from my iPhone, add some music from Musicbed, and you get the video above.
I am so happy with how this turned out. I had been reviewing the footage even while we were still in Maui, messing with color correction just to see what I was working with, and I knew the images were good. But once I found the right song and started timing the cuts to the music, it instantly became something awesome.
It was a bit of a trick to get the video to line up with the music so closely, especially the fast cuts at 1:28 and 2:24. What I finally settled on was using the music file as the main storyline in Final Cut, and then placing the video in as attached storylines. That way, I could fine-tune specific cuts (like those two fast-cut sections) and then leave them completely alone while I edited other parts.
I decided to go for a really wide aspect ratio of 2.35:1, so most of the footage actually has the top and bottom cropped out a bit. This actually turned out to be an advantage, because it meant I could do tricks like keyframing the vertical position of the crop, adding what looks like camera movement that wasn’t originally in the shot. You can see some examples of this in the first clip flying backwards over the beach, the shot pulling away from the boat at 2:22, and a few other spots if you’re looking closely.
Enjoy the video; hopefully it’s as fun to watch as it was to make.
PS: If you want to find out right away when I post new videos like this, you can follow me on Vimeo or Twitter. I also sometimes edit these videos to fit on Instagram, too.
The straight-down shot can be pretty cool. In this particular shot, I didn’t actually move the drone vertically, but used the Ken Burns effect to fake the zoom. I think the next time I use this angle, I’ll do it over a subject with more motion.
I wanted to see if I could get away with using the drone as a way to fake getting a dolly shot. It worked, but it might not be the most practical thing on-set, what with the spinning blades and all.
Both this shot and the one starting at 0:38 are portions from a long clip. When I started recording, my intent was to fly forwards and descend (basically the opposite of the final shot).
Halfway down, Laura and the dogs started playing fetch out in the grass, so I decided to abandon the high-concept shot and just focus on the action.
I tilted the gimbal down, so the camera is 45-ish degrees off the horizontal. This turned out to be a great angle, and I’ll definitely be using it again. I think it’s more visually interesting than aiming perfectly level at the horizon, but not as disorienting as pointing straight down.
This is some more great fetch-playing from the same clip as the shot starting at 0:11. I love the orbit-style shot, so I wanted to try flying one. There’s actually a built-in function in the Phantom software to do this sort of effect, but it requires a bunch of setup and by then the opportunity would’ve been gone, so I just piloted it by the seat of my pants.
I’m not gonna lie, this shot is shamelessly stolen from the final shot in this YouTube video, which is so great. In both shots, the speed actually ramps up after a few seconds, once the people are small enough that you won’t really notice the speedup.
This is my first video that I’ve shot with an ND filter, and I have to say that I’m pretty happy with how this turned out. I don’t think I would have been able to point the camera right into the sun and still capture so much detail, or get that nice slow 24 FPS, without the ND filter to cut the light down.
Also, I’m not sure if I’m just picking bad export settings or what, but it seems like all the social networks (Twitter, Instagram, Facebook) ruin your video with awful compression. That’s part of my motivation for cross-posting videos here, on my website: I’m not going to take perfectly good video and smash it down to cut bandwidth costs.
For reference, I edited this in Final Cut Pro at 29.97 FPS (which I think is the frame rate Instagram uses) and 1920x1080 resolution, then exported it to H.264 using FCP’s “Master File” setting. The output was a 16 Mbit/s .mov file, around 31 MB. I tried running the result through Compressor, which did get the file size down to roughly 9 MB, but the quality was noticeably worse, especially in the shadows.
Are there better settings I should be using to avoid the ugly compression after uploading?
Our setup is pretty simple: a laptop, a Yeti microphone, and the three of us huddled around it. I have some wild idea that someday we’ll each have our own mic and be sitting in real sound booths, like an honest-to-god radio show, but we haven’t reached that level of crazy yet. The multiple-mic setup would be handy, though, because I think it would give us more options during editing: for example, the volume of each host’s voice could be adjusted independently.
Recording the show has gotten more complex since then (we have videos now; check out our YouTube channel), but the multi-microphone dream has become a reality. Now we have two Yeti microphones, and we record them simultaneously.
This wasn’t easy to get configured, though. There isn’t a lot of information about this online, so I’m writing this mostly as a warning to others and as bait for Google.
TL;DR version: out of the box, you cannot use multiple Yeti microphones on the same Mac.
The longer version: it is possible, but you’ll need to send one of the mics back to Blue, the manufacturer, for a firmware update.
There are plenty of tutorials online that show you how to record multiple USB mics at once using your Mac. They usually go something like this:
1. Plug in both microphones.
2. Open the Audio MIDI Setup utility.
3. Create a new Aggregate Device.
4. Check the box next to each microphone in the device list to add it to the aggregate.
5. Choose the aggregate device as the input in your recording app.
Scott and I have exactly the same microphone: the Blue Yeti. When we tried to follow these steps, things stopped working around step 4: only one microphone would show up in the list as an available device. Each one worked by itself, but if we tried to use both at once, only one would actually work.
USB devices may have serial numbers. For the devices that have them, those serial numbers would ideally be unique to every individual device: a special snowflake, no two alike.
In reality, this isn’t always the case. Sometimes two devices will have the same serial number. This may or may not be a problem. In our particular case with Yetis, it’s a problem.
If you plug a Yeti microphone into your Mac and open the System Information app, you can actually see the serial number for yourself. Highlight the Yeti Stereo Microphone entry, and you’ll see a line that says something like:

Serial Number: REV8

I tried this with both my microphone and Scott’s, and both were using REV8 as their serial number. Whoops! I suspect all Yeti mics use this number.
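If you’d rather enumerate every attached device at once, a quick IOKit sketch like the one below (my own, nothing official from Blue) can print each USB device’s name and serial number:

```objc
#import <Foundation/Foundation.h>
#import <IOKit/IOKitLib.h>

int main(void) {
    @autoreleasepool {
        // Match every USB device currently attached to the machine.
        io_iterator_t iterator;
        kern_return_t result = IOServiceGetMatchingServices(
            kIOMasterPortDefault, IOServiceMatching("IOUSBDevice"), &iterator);
        if (result != KERN_SUCCESS) return 1;

        io_service_t device;
        while ((device = IOIteratorNext(iterator))) {
            NSString *name = CFBridgingRelease(IORegistryEntryCreateCFProperty(
                device, CFSTR("USB Product Name"), kCFAllocatorDefault, 0));
            NSString *serial = CFBridgingRelease(IORegistryEntryCreateCFProperty(
                device, CFSTR("USB Serial Number"), kCFAllocatorDefault, 0));
            NSLog(@"%@: %@", name, serial ?: @"(no serial number)");
            IOObjectRelease(device);
        }
        IOObjectRelease(iterator);
    }
    return 0;
}
```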
When I emailed the support team at Blue, they told me they could update my mic with a new serial number: I just needed to provide proof-of-purchase, mail them my microphone, and they’d send it back with a new serial number.
It took a week or two, but this definitely did the trick: Blue reprogrammed my mic to use the serial number 777, and we’ve been successfully using both microphones together ever since.
Other models from Blue may also be affected; I’ve seen people online mention very similar issues when trying to use two Snowball microphones together, and I suspect it’s the same underlying problem.
This problem was surprisingly hard to track down, so hopefully this helps someone else figure out their microphone woes.
I thought it would be interesting to look behind the scenes of Low Earth Orbit. Inspired by This American Life and their excellent comic about their production process, Radio: An Illustrated Guide, I’ll outline the life of a typical episode from concept to completion.
Every episode starts out as a humble topic suggestion.
We use a private wiki to collaborate on various parts of the show, and we have several pages in the wiki dedicated to topic brainstorming. We segregate the topics into broad categories: movies, games, books, and everything else.
Whenever possible, we try to review things soon after they’re released. Movies are the easiest: they have a release date that we know about well in advance (usually a Friday), it only takes a few hours to watch the film, and we can record on Saturday or Sunday and post the episode on Tuesday. By the time the MP3 lands on a listener’s device, the movie is still a very fresh topic.
Such promptness is harder with games, but not impossible. We reviewed the latest SimCity back in September: it was released on a Thursday and we had our review published on Tuesday.
The real difference between games and movies is the time commitment. The Last of Us, a recent PlayStation 3 game, takes roughly 15 hours to complete. Compare that to the not-quite-2-hours of the Ender’s Game movie. It’s logistically and physically impossible for me to do anything for 15 hours straight, so it takes multiple days to play a game to completion. For most of our game reviews, we’ve had to pass judgement without having seen the end, simply because we don’t have time to get there.
So far we’ve only done one book episode (and it was really a comic series, not a novel), but I imagine that they would have an even larger time cost than video games.
Ultimately, we choose topics based on what all three of us are interested in and can commit time to reviewing. Scheduling is a lot more important here than you might guess: being able to predict far in advance what topics we’ll be covering helps make the whole production feel less rushed.
Barring any scheduling conflicts, we usually record on Sundays.
There’s no secret formula here, it just works out well for everyone: all three of us are usually free on Sunday evening.
We’ve recorded at all three of our apartments. So far, the Voss place has hosted the most, because our spare bedroom converts into a recording studio much better than anyone expected.
Our setup is pretty simple: a laptop, a Yeti microphone, and the three of us huddled around it. I have some wild idea that someday we’ll each have our own mic and be sitting in real sound booths, like an honest-to-god radio show, but we haven’t reached that level of crazy yet. The multiple-mic setup would be handy, though, because I think it would give us more options during editing: for example, the volume of each host’s voice could be adjusted independently.
The “one weird trick” to getting good audio outside of a professional studio is to get rid of echoes. You can shell out big money for fancy audio foam panels, but blankets and clothes work fine, too: even Randall Beach, “The Voice of NPR,” did all his recordings in a coat closet.
We do three things to our room to get it into studio-mode:
We take the fuzziest blanket we have and drape it over the desk, then set the microphone on top. This helps eliminate echo from the desk surface.
A very generous coworker donated two fancy-pants audio foam panels to us, and we place these right behind the microphone. This prevents echoes from the computer monitor and wall behind the desk.
We drape blankets over some custom PVC frames that I built, and position those stands behind and around our chairs, as closely as we comfortably can.
Those PVC frames have been amazing. They were cheap and easy to build, and they deliver great results. The construction is simple enough: they’re basically just big rectangles with feet.
The blankets are held in place with a few clamps at the top.
It’s hard to see from these photos, but this room is especially nice because it has a carpeted floor, a daybed in the corner with a fluffy comforter, and no noisy appliances like a refrigerator or air conditioner.
Scott does all the editing in Logic Pro X.
I can’t claim to know much about this area, but I do know that one of the things we do to our audio is apply compression. Unfortunately, the word “compression” means a lot of different things in different contexts. In this case I’m talking about dynamic range compression, which basically means “the softest sound and the loudest sound shouldn’t be that different.”
For example, in our first few episodes, it was a bit hard to hear us when we were speaking normally, but then it was ear-splittingly loud when we would laugh. That’s because the dynamic range was large: the soft speaking sounds were much quieter than the loud laughing sounds. Using compression makes the dynamic range smaller: the speaking becomes louder and the laughing quieter, so our listeners don’t have to constantly adjust the volume.
It’s possible to use software filters to remove background noise and echo, but it’s much easier to just record better audio in the first place by moving to a different room or setting up blankets.
The technical details of publishing a podcast, from hosting MP3 files to writing the RSS feed, are easy to find so I won’t reiterate them here.
Our website is created with Jekyll, a tool for building websites entirely from static files. Jekyll is a very nerdy way to publish a website, but I like it because it’s so easy to manage. There’s no server process to monitor, no database to back up, no fear that a link from Daring Fireball will bring down the server, just a directory of static HTML files.
To streamline the process, I wrote some shell scripts for creating episode files in Jekyll and publishing the site to the server. These scripts are basically just wrappers around curl and rsync, but sometimes that’s all a script needs to be useful.
The MP3 files themselves are hosted on Amazon S3 because, well, everything is on Amazon Web Services these days. And because it’s reliable: we don’t have to worry about it going down, or exceeding our bandwidth quota. And it’s cheap. Hypothetically we could use a CDN like CloudFront for even faster downloads, but I don’t think anyone is complaining about our download speed.
We embed an inline audio player using audio.js. This makes it easier for website visitors who aren’t subscribers to listen to individual episodes.
I have a charts-and-graphs problem. Like a “the first step is admitting you have a problem” problem.
Naturally, I want to be able to track as many stats as possible about the show. How many listeners do we have? Which episodes are the most popular? How are people finding out about us?
Unfortunately, it’s pretty hard to measure that for podcasts.
The website is easy to track: throw Google Analytics in there and you can find out 99% of what you want to know. Tracking the audio is harder: you’re basically reduced to parsing through Apache logs, looking at user-agents and IP addresses to guess how many unique listeners there are.
That’s not to say there are no useful insights to be had from server logs. Using those logs, we’re able to estimate both the number of subscribers (aka, people who have added Low Earth Orbit to iTunes or Instacast or what have you) and to estimate how many listens each episode is getting. Those two numbers aren’t the same because a non-subscriber can listen to individual episodes on our website, which increases the number of listens but not the number of subscribers.
I wrote a set of Python scripts to gather up all this data and format it nicely. Every night, these scripts publish a web page with pretty charts and graphs of listens, and send me an email with subscriber numbers.
Tracking subscribers is easy enough with 2000s-era techniques: look at the Apache request logs for /podcast.xml and count up the requests, using the IP address and user-agent to prevent duplicates. I also do some filtering based on user-agent to exclude things that may request our RSS feed but aren’t really people: for example, I exclude the Googlebot and the iTunes Store servers.
Tracking downloads is just as simple: for each MP3, take the total number of outgoing bytes and divide it by the number of bytes in the file. It’s possible to parse the S3 logs yourself, but we use Qloudstat to do that for us and access their JSON API instead.
You may be wondering “why don’t you just count the number of requests for each MP3?” We can’t do that because some podcast apps use range requests to download the episodes, which means they download the file in many small chunks instead of one big request. A single user listening to a single episode may make hundreds of requests. But, they probably won’t download the same data twice (or at least, they won’t do that very often), so the “total bytes divided by filesize” trick gets us in the ballpark.
There are about a million other things I would love to be able to measure but they either aren’t technically possible or would require us to stop using Jekyll, so I’ll settle for these two stats for now.
Almost all of our website traffic comes from Twitter.
We have an official show Twitter account, @lowearthshow. Every episode gets tweeted at least once, and we’ve even had some high-profile retweets: when we posted our first SimCity episode, the general manager of Maxis Emeryville retweeted us.
Other than announcing episodes on social media, I’ll admit that I have no idea how to promote a podcast. I don’t think we have any world-domination plans: I certainly don’t think we’ll get as big as juggernauts like Radiolab or This American Life, but it’s fun to see our audience slowly grow.
So, what’s next for Low Earth Orbit? I don’t know.
Podcasting as a whole seems like it’s heating up again. People are talking about podcasts more, several well-known developers are working on podcast apps, and the tools for creating audio have never been better. It’s a new golden age.
All I know for sure is that now is a very exciting time to be a podcast listener and a podcast creator.
Instead of giving you numbers, I want to tell the story of the whole year, month-by-month. Brace yourself, it’s going to be a long post.
Side Note: It was harder than I thought to remember everything that happened in an entire year. To recall everything, Laura and I looked at our iPhoto library and Twitter history; if that’s not a sign of the times, I don’t know what is.
Neither of us can remember where we were for last New Year’s Eve; we might have had people over to our apartment? Later in January, I traveled to Virginia to visit CARFAX headquarters, and returned to Missouri just in time for our Employee Appreciation Party (also nicknamed “CARFAX Prom”).
In February one of my last freelance projects, Skip Tunes, was released on the Mac App Store and was well-received. Laura left her job at the non-profit Marine Parents, and we both travelled to California for the first time, for my on-site interview at Apple.
By March we were already making plans to move to California; we started packing things up around the apartment and tried to coordinate with all the vendors that were going to make our move possible. On the 30th, my co-workers had a going-away dinner for me at Flat Branch, and the 31st was my last day at CARFAX. That night, we had all our Columbia friends over to our apartment one last time.
April may have been the busiest month: the month we moved. For the first week, we visited our families in Cape Girardeau. Then we went back to Columbia to do last-minute packing on the 9th, had everything moved out on the 10th, flew to California on the 11th, went apartment hunting on the 12th, visited San Francisco that weekend, and my first day at Apple was that Monday. Let me tell you, that was a whirlwind week! Before the month was done, we had already been hiking at Big Basin and been to the beach at Santa Cruz. On the 30th, I turned 25 years old. Laura surprised me with a birthday slideshow my family and friends had made for me; you guys are the best.
In May we moved into our new apartment; it didn’t take long before we got our first IKEA furniture (a couch). The car was packed completely full; Laura even had to hold boxes in her lap to get everything to fit. Laura started her job at Henri Bendel. On the 20th there was a solar eclipse. From the 25th to 28th we were back in Missouri for our friends’ wedding, and to see our families.
It seems like June was dominated by WWDC. A big group of people we knew from Missouri were in town, so we made a lot of trips up to San Francisco to see them. We added another piece of IKEA furniture (a daybed, for guests). Kyle and Stephen came along with us to see the Golden Gate Bridge and Sausalito, we visited our friend Tom at Facebook headquarters, and bumped into Mark Zuckerberg while we were there.
We spent the 4th of July up in San Francisco, where it was so cold Laura had to buy a scarf: I’ve never been shivering on the Fourth before. Later in the month, Laura’s whole family came to visit for a week, so we took them to Ghirardelli Square and some other SF sights. The day after they left, my brother paid me a surprise visit, and stayed until the end of the month. We introduced Jordan to what’s probably our favorite restaurant in the universe, The Tonga Room.
In August Laura left Henri Bendel to start her new job as a manager at Snap Fitness. She had to go to Minnesota for training, so she spent her 24th birthday on the road.
In early September, we went up to San Francisco to see Samsara, and got to hear a short introduction from the filmmakers themselves. Laura and I ran (well, “ran”) in the Color Me Rad 5K together, and Laura ran in the Title Nine 9K. On the 20th, my parents came to visit, so we went up to San Francisco to see the Painted Ladies and drink at the Monk’s Kettle. Laura went to Las Vegas for a Snap conference, and actually won money at the casino!
Our first wedding anniversary was October 1st. We spent the weekend in wine country, and managed to not visit any wineries but did sneak into a few breweries (we still need to go back and actually have wine). In the middle of the month, Laura’s friend Nicole came to visit us. We attempted to go to Fleet Week up in San Francisco, but couldn’t handle the crowds, so we hung out with friends instead. On the 20th we went camping in Yosemite; it was Laura’s first camping trip, and it was even in bear country. We came back to civilization and hosted a Halloween party where we tried to dress up as Nathan and Chloe from Uncharted (I think we did a pretty good job).
Laura went back to Missouri for the first part of November, both for her sister’s birthday and a friend’s wedding. We upgraded from a double to a queen mattress and bed (IKEA loves us). We weren’t able to go back to Missouri for Thanksgiving, but our friend Scott introduced us to his circle of family friends and we spent the holiday with them; they were so kind to us and made us feel welcome. I know I have a lot to be thankful for.
December was another busy month: on the 1st we went to a Mad Men theme party, where I realized Laura can pull off 60s hair really well. We put up and decorated our Christmas tree, and visited a lot of friends here in California. Then we headed back East to Missouri for Christmas; we were both glad to spend some more time with our families. I got to see Cape’s new casino for the first time (and threw away $40 at the craps table). When we got back, I spent a weekend at Tahoe with some of the guys, and then we had a few people over for New Year’s at our apartment.
And that’s it: an entire year in just over 1000 words.
Greg Dougherty, my client, has done a great job getting press attention for the app. So far, it’s been mentioned on The Unofficial Apple Weblog, CNET, Lifehacker, Cult of Mac, Macworld (3.5 out of 5 Mice) and a few other websites. Not long after launching, it even climbed to the #1 Music app in nine different countries!
Warning: serious technical details ahead.
When I started working on the custom UI for the app, I tried to use the AppKit collection of classes: NSView, NSButton, NSTextField, and friends. It didn’t take long before I was really frustrated with how hard it was to customize the appearance of these controls. For example, on iOS it’s easy to use a custom image as the background of a UIButton: you just call setBackgroundImage:forState:. Using that method, you can even specify two different images: one to use normally, and another to use when the control is actually being pressed.
Getting the same thing done with NSButton is not so easy. The most straightforward way to get a custom image is to subclass NSButtonCell and override its drawBezelWithFrame:inView: method, which is not at all obvious if you’re not familiar with the way NSControl uses cells for drawing. Then, for each button, you have to instruct it to use your custom cell class instead of the default class.
To show a different image when the button is pressed, your implementation of that cell method has to inspect the isHighlighted property to see if the user is holding down the mouse button. When drawing that NSImage, you need to make sure you use the right drawing method: one that respects the flipped or non-flipped setting for the image and the graphics context.
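Putting those pieces together, a minimal cell subclass might look something like this (the class and property names are invented, not from Skip Tunes):

```objc
@interface JVImageButtonCell : NSButtonCell
@property (strong) NSImage *normalImage;
@property (strong) NSImage *pressedImage;
@end

@implementation JVImageButtonCell

- (void)drawBezelWithFrame:(NSRect)frame inView:(NSView *)controlView {
    // Pick the artwork based on whether the mouse button is down right now.
    NSImage *image = self.isHighlighted ? self.pressedImage : self.normalImage;

    // This variant respects the flipped-ness of both the image and the
    // destination context, which is the gotcha mentioned above.
    [image drawInRect:frame
             fromRect:NSZeroRect
            operation:NSCompositeSourceOver
             fraction:1.0
       respectFlipped:YES
                hints:nil];
}

@end
```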
None of the above is rocket science, but it’s a lot of work for what seems like a really easy task. (Kudos to the UIKit team for taking the opportunity to rethink and clean up these interfaces.)
Instead of wrangling all this myself, I took a shortcut and used an open source framework called Chameleon.
Created by The Iconfactory, Chameleon is a re-implementation of a big chunk of UIKit on top of AppKit. In a nutshell, it lets developers write iOS code that runs on OS X.
Chameleon really shines when you want to use the same code to produce both an iOS and a Mac app, but in this case it was worth it just to use the better APIs from UIKit.
I was able to write most of the Skip Tunes user interface using UIView, UIButton, UILabel, and even UIViewController, in addition to AppKit classes like NSStatusItem and NSWorkspace.
If I had to do it over again, I think I would still use Chameleon, although I think I would try a bit harder to get AppKit to behave. I generally don’t like using cross-platform toolkits, but in this case the result was good enough to make up for it.
The job of actually controlling and inspecting the media players falls onto AppleScript. Luckily, all three apps have similar scripting interfaces, so it wasn’t hard to create an abstraction layer on top of them.
Along the way, I learned a couple of neat tricks about Scripting Bridge, which is an API for using AppleScript from Objective-C.
To generate header files for each app, I used two command line tools: sdef and sdp. Together, they take the scripting definition for an app and output an Objective-C .h file that you can include in your app. Here’s how I generated the header for iTunes:

```
sdef /Applications/iTunes.app/ | sdp -fh --basename iTunes -o ~/Desktop/iTunes.h
```
The basename is used to generate some of the object names in that header file. In the above example, the iTunes.h file contains classes named iTunesApplication, iTunesPlaylist, etc.
I also learned that the SBApplication object, which represents your app’s connection to the scriptable app, can have a delegate that it reports errors to. This was really helpful during development, because I could see where things were going wrong.
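Wiring that up takes just a few lines; a sketch, with a made-up logger class:

```objc
@interface JVScriptingErrorLogger : NSObject <SBApplicationDelegate>
@end

@implementation JVScriptingErrorLogger

// Called whenever an Apple event sent through Scripting Bridge fails.
- (id)eventDidFail:(const AppleEvent *)event withError:(NSError *)error {
    NSLog(@"Scripting Bridge event failed: %@", error);
    return nil; // no substitute return value
}

@end

// Later, when setting up the connection. Keep a strong reference to the
// logger somewhere long-lived; the delegate property does not retain it.
iTunesApplication *iTunes = (iTunesApplication *)[SBApplication
    applicationWithBundleIdentifier:@"com.apple.iTunes"];
self.scriptingLogger = [[JVScriptingErrorLogger alloc] init];
iTunes.delegate = self.scriptingLogger;
```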
If your app needs to respond to the behaviour of other apps, the way that Skip Tunes needs to respond to the media player starting, stopping, or changing tracks, you need to see if NSDistributedNotificationCenter has the information you need. Some apps will broadcast notifications over this channel about their state, which can save your app from doing nasty polling.
For example, whenever the state of playback changes, iTunes publishes a com.apple.iTunes.playerInfo notification on this notification center. Instead of checking iTunes’s state over Scripting Bridge every second, I just register for this notification and wait to hear back.
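Registering looks something like this (the handler selector is just a placeholder name):

```objc
// Listen for iTunes playback-state changes instead of polling.
[[NSDistributedNotificationCenter defaultCenter]
    addObserver:self
       selector:@selector(playerInfoDidChange:)
           name:@"com.apple.iTunes.playerInfo"
         object:nil];
```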
If you go down this route, though, make sure to listen to NSWorkspace notifications, too, since opening and closing apps doesn’t usually send those state change notifications.
You’re still here? Go buy Skip Tunes, already!
Tweets by @justinvoss: approx. 600
Visits to justinvoss.com from Reddit: 374
Money made on the iOS App Store: $207.87
Albums on vinyl acquired: 9
Books read in The Dark Tower series: 7
Video games completed: 6
Google Reader subscribers for justinvoss.com (including me): 5
Apps released: 4
Transmissions repaired: 1 (details)
Marriages: 1 :)
During development, I usually run my app’s Django backend on my laptop and point the iOS app at localhost.
Some iOS features, though, are hard to test in the simulator. Barcode scanning, accelerometer tracking, and other hardware features can only be tested on a real device. But running the app on a device means you can’t just say localhost and expect to connect to your laptop; you’ll need a way to reference your development machine without using the IP address, which changes too often to be useful.
This is exactly what Bonjour is for. Bonjour is Apple’s implementation of “zero configuration networking,” which lets apps publish and browse network services. Among other things, Bonjour gives your machine a hostname in the .local domain, which stays constant even as your IP changes.
You can find your machine’s Bonjour hostname in the Sharing pane of System Preferences. You can see that my laptop goes by the name justin-macbook-pro.local. In my iPhone app, instead of connecting to localhost:8000, I’ll connect to justin-macbook-pro.local:8000.
There is no Step 2; it’s really that easy! No matter how many times my laptop changes its IP address, that hostname will always resolve to the right machine. Now I can run my iPhone app on a real device and it’s almost as easy as when everything was on localhost (you do have to remember to tell Django to listen on all interfaces: just say manage.py runserver 0.0.0.0:8000).
There’s an obvious downside, though: now my machine name is in the code. What if I’m not the only developer on the project? Is everyone going to maintain their own version of the configuration, with their own hostname in place of mine? That would work, but it just feels icky, to use a technical term.
What if we didn’t have to hard-code anything at all, and the iPhone app could discover the server totally automatically?
Let’s publish our Django server as a full-fledged Bonjour service, and then write some iOS code to browse for it.
Apple has some Objective-C APIs for publishing services, and there’s at least one Python library that claims to do the same, but there’s an even easier way: a command-line tool called dns-sd (read the man page for details). In a nutshell, invoking dns-sd follows this format:

```
dns-sd -R <name> <type> <domain> <port>
```
Breaking it down a piece at a time:
- -R means we want to register (i.e., publish) a service.
- <type> is the service type, something like _http._tcp. You can and should make up your own types, like _myapp._tcp.
- <domain> will almost always be local.
.So, our hypothetical Django server would be published like so:
```
dns-sd -R "App API on Justin's MacBook" _myapp._tcp local 8000
```
To make life easier, I’ve written a Python script called runserver-bonjour that will both run your Django project and publish it using Bonjour.
A Bonjour type, like _http._tcp, is used to filter for specific services. Unless you’re trying to be a drop-in replacement for something else, your app should have its own unique type. In these examples, I’m using _myapp._tcp. There’s a list of all registered types that has lots of examples.
You might be thinking, “Wait, aren’t we just an HTTP server? Why not use _http._tcp?” We don’t want to do that, because the fact that our service uses HTTP as a transport should be considered an implementation detail. The _http._tcp type has a very specific use: it’s for servers that deliver web pages intended to be displayed in a browser. That’s not what our server does: it serves an API, not web pages, so that’s not the right type for us.
To use a real example, when you enable iTunes music sharing, your copy of iTunes starts up an embedded web server: the music sharing is actually done over HTTP. But iTunes uses the Bonjour type _daap._tcp to distinguish itself from other HTTP servers. Your app should use the same strategy.
Finding a Bonjour service isn’t too hard, but it does involve two layers of delegation, which can make the code hard to follow. There are two phases: in the first, we’ll browse for any services that match the type we want (_myapp._tcp); then, we’ll pick a single service and resolve it, which will tell us the hostname and port to connect to.
To browse for services, you need an NSNetServiceBrowser. Give it a delegate, then tell it what type of services you want (FYI, all these code samples are using ARC). Roughly:
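```objc
// A minimal sketch; browser and services are assumed properties
// on this controller.
self.services = [NSMutableArray array];
self.browser = [[NSNetServiceBrowser alloc] init];
self.browser.delegate = self;
[self.browser searchForServicesOfType:@"_myapp._tcp." inDomain:@""];
```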
Passing in an empty string for the domain means you want the default domain, local.
As the service browser finds services, it notifies the delegate. It calls the delegate once per service, so if you want to do something with the whole collection of them, you need to maintain a list yourself. The moreComing parameter is a hint that there may or may not be more pending calls to this method. Apple recommends using this flag to determine when to update your UI, but in this case I’m using it to decide when to stop browsing and start resolving.
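The delegate method ends up looking something like this:

```objc
- (void)netServiceBrowser:(NSNetServiceBrowser *)browser
           didFindService:(NSNetService *)service
               moreComing:(BOOL)moreComing {
    [self.services addObject:service];
    if (!moreComing) {
        // No more pending results: stop browsing and start resolving.
        [browser stop];
        [self resolveFirstService];
    }
}
```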
In some apps, you might want to let the user choose which service to use. For our purposes, we can assume there’s usually only one service, so we’ll just grab the first one and resolve it. This is where the second level of delegation happens: the service needs a delegate to notify when it finishes resolving.
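Resolving is the same dance one level down; a sketch:

```objc
- (void)resolveFirstService {
    NSNetService *service = [self.services objectAtIndex:0];
    service.delegate = self;
    [service resolveWithTimeout:5.0];
}

// Called once the service has a usable hostname and port.
- (void)netServiceDidResolveAddress:(NSNetService *)service {
    NSLog(@"resolved %@:%ld", service.hostName, (long)service.port);
}
```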
At this point, you have all the info you need to connect to this service. I just take the hostname and port, slap them together with +[NSString stringWithFormat:], then create an NSURL from that.
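In other words:

```objc
NSString *urlString = [NSString stringWithFormat:@"http://%@:%ld/",
                                                 service.hostName,
                                                 (long)service.port];
NSURL *baseURL = [NSURL URLWithString:urlString];
```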
I put together a class to handle the details for me, called RBServerLocator (the RB prefix is to match runserver-bonjour). Using it is as easy as specifying the type and providing a completion block that takes a resolved service. There’s even a category on NSNetService to create an HTTP URL for you.
This short trip through Bonjour doesn’t nearly do it justice: it’s a powerful toolkit for sharing and discovering other apps and devices. Apple’s Bonjour Overview can take you on a deep dive into everything it has to offer.
One handy trick is an algorithm called a low-pass filter. This basically takes a stream of data and filters out everything but the low-frequency signal. Effectively, this “smooths” out the data by taking out the jittery, high-frequency noise.
Like I said before, my math skills are weak, but looking at the algorithmic implementation on Wikipedia it seems like it’s basically a weighted average: some of the data is from the previously filtered value, and some of the data is from the raw stream. Here’s my breakdown of what each piece means:
- The alpha value determines exactly how much weight to give the previous data vs the raw data.
- dt is how much time elapsed between samples.
- I’m not sure exactly what the RC value is for, but by playing with its value it’s possible to control the aggressiveness of the filter: bigger values mean smoother output.
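Translated into code, one step of the filter is only a few lines; a minimal sketch (the RC constant is something you’d tune by hand):

```objc
// One step of the low-pass filter from the Wikipedia pseudocode:
//   alpha  = dt / (RC + dt)
//   output = previous + alpha * (raw - previous)
static double lowPass(double previous, double raw, double dt, double RC) {
    double alpha = dt / (RC + dt);
    return previous + alpha * (raw - previous);
}
```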
If you haven’t already seen GitHub’s 404 page, go take a look: make sure you move your mouse around the page. See how the images move around in parallax? If you visit the same page on your smartphone, it can even use the accelerometer in your device to make the images move.
Update: It appears the mobile version no longer uses the accelerometer. Darn!
I thought it would be neat to replicate the effect in a native iOS app. Grabbing data from the accelerometer and shifting the views around is pretty easy, but the result is poor. The accelerometer is way too sensitive to small movements, and it makes the app feel over-caffeinated.
Low-pass filtering to the rescue!
Here’s a before-and-after video. Each version of the app is using the same recorded accelerometer data, running in a 10-second loop. The version on the left is using the raw data, while the version on the right is using data run through a low-pass filter.
The results should speak for themselves: the filtered data is a little slower to react, but moves smoothly and deliberately. The raw data jumps and twitches too much for comfort.
First things first, let’s get some accelerometer data.
I put this in my view controller’s viewWillAppear: method:
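A minimal sketch, assuming the UIAccelerometer API of that era and a 60 Hz update rate (the method names below match the ones described next):

```objc
- (void)viewWillAppear:(BOOL)animated {
    [super viewWillAppear:animated];
    UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
    accelerometer.updateInterval = 1.0 / 60.0;
    accelerometer.delegate = self;
}

// Delegate callback: feed each raw sample into the filter.
- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration {
    [self updateViewsWithFilteredAcceleration:acceleration];
}
```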
The updateViewsWithFilteredAcceleration: method does the actual filtering:
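Roughly, with _filteredX and _filteredY as instance variables holding the running filter state (the constants are assumptions to tune):

```objc
- (void)updateViewsWithFilteredAcceleration:(UIAcceleration *)acceleration {
    static const double kRC = 0.25;  // hand-tuned; bigger = smoother
    double dt = 1.0 / 60.0;          // matches the update interval above
    double alpha = dt / (kRC + dt);

    // Weighted average of the previous filtered value and the raw sample.
    _filteredX += alpha * (acceleration.x - _filteredX);
    _filteredY += alpha * (acceleration.y - _filteredY);

    [self updateViewsWithAcceleration:CGPointMake(_filteredX, _filteredY)];
}
```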
And finally, the updateViewsWithAcceleration: method actually moves the center points of the views:
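Something like this, where parallaxViews is a hypothetical array of the layered image views:

```objc
- (void)updateViewsWithAcceleration:(CGPoint)acceleration {
    static const CGFloat kMaxOffset = 30.0; // how far the deepest layer travels
    [self.parallaxViews enumerateObjectsUsingBlock:^(UIView *view,
                                                     NSUInteger index,
                                                     BOOL *stop) {
        // Deeper layers (later in the array) move farther, giving parallax.
        CGFloat depth = (CGFloat)(index + 1) / self.parallaxViews.count;
        CGPoint center = CGPointMake(CGRectGetMidX(self.view.bounds),
                                     CGRectGetMidY(self.view.bounds));
        center.x += acceleration.x * kMaxOffset * depth;
        center.y -= acceleration.y * kMaxOffset * depth;
        view.center = center;
    }];
}
```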
Pretty simple, right? The math isn’t that bad, and it gives us a great result. This is just a small example of what real math can do for your app. In the next few months, I’ll try to do some more posts about using advanced math and statistics to make more intelligent systems.
If anyone’s interested, I can do a follow-up post about how I recorded the accelerometer for playback later.
Custom fonts are great, but working with UIFont isn’t always as much fun.
The tricky part that gets me every time is getting the font name right: it has little or nothing to do with the filename of the font. How am I supposed to figure out what name to use?!
It turns out the developers at Apple are way ahead of me: the easiest way to get the name right is to just ask the UIFont class what names it knows.
Thanks to Richard Warrender’s article about custom fonts on iOS, I discovered the awesome +[UIFont familyNames] and +[UIFont fontNamesForFamilyName:] methods. The first returns a list of every font family the system recognizes. The second takes one of those family names and returns a list of font names associated with it. The font name is the one you should give to +[UIFont fontWithName:size:] to get your custom font.
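Something like this, run once at launch, will dump every name the system knows to the console:

```objc
for (NSString *family in [UIFont familyNames]) {
    NSLog(@"%@", family);
    for (NSString *fontName in [UIFont fontNamesForFamilyName:family]) {
        NSLog(@"    %@", fontName);
    }
}
```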
You still have to do the regular “add your font file to the project and specify it in your Info.plist” dance, but that’s not the hard part.
Hopefully this saves you some time and frustration!
This handy little app can change your desktop background automatically or on-demand, and it pulls all of its content from SimpleDesktops.com, impeccably curated by Tom Watson.
Big props go out to Greg Aker for server monkeying and Louis Harboe for the smashing icon.
Buy it, rate it, tell your friends, email me with any feedback.
A data source is a species of delegate. It’s a separate object that provides data to another according to a defined standard. That standard can be formal, like an Objective-C protocol, but doesn’t need to be.
The most common UIKit class that consumes data sources is UITableView. Typically, the view controller of a particular screen will be both the delegate and the data source for a UITableView that takes up most or all of the screen. This pattern is so common, in fact, that Apple includes a view controller class called UITableViewController that does all this wiring for you!
To provide data to the table view, the data source implements at least these methods:
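Those are the two required methods of the UITableViewDataSource protocol:

```objc
- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section;

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath;
```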
The first method, tableView:numberOfRowsInSection:, is called by the table view to determine how many rows are in the section with the given index. By default, a table view has only one section, so in the simplest case the section argument will always be zero.[1] The data source should consult whatever data it has and return the number of rows the table view should display.
The second method, tableView:cellForRowAtIndexPath:, is how the table view will determine what content to display for a particular row in the table. The object returned here is an instance of UITableViewCell, which is a fully-fledged UIView subclass.
The indexPath argument is essentially a list of integers that describes the section and row that the table view needs a cell for. To get the individual values, call the section and row methods:[2]
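For example:

```objc
NSInteger section = [indexPath section];
NSInteger row = [indexPath row];
```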
The actual implementation of tableView:cellForRowAtIndexPath: is specific to the data you want to display. As an extremely simple example, here’s what it might look like if you wanted a plain table view and your data is simply an array of strings, in the names attribute:
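```objc
// A sketch using the pre-storyboard reuse idiom of checking for nil
// after dequeuing; names is assumed to be an NSArray of NSStrings.
- (NSInteger)tableView:(UITableView *)tableView
 numberOfRowsInSection:(NSInteger)section {
    return [self.names count];
}

- (UITableViewCell *)tableView:(UITableView *)tableView
         cellForRowAtIndexPath:(NSIndexPath *)indexPath {
    static NSString *CellIdentifier = @"Cell";
    UITableViewCell *cell =
        [tableView dequeueReusableCellWithIdentifier:CellIdentifier];
    if (cell == nil) {
        cell = [[UITableViewCell alloc]
                      initWithStyle:UITableViewCellStyleDefault
                    reuseIdentifier:CellIdentifier];
    }
    // Look up the string for this row and drop it into the cell's label.
    cell.textLabel.text = [self.names objectAtIndex:[indexPath row]];
    return cell;
}
```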
The business with dequeueReusableCellWithIdentifier: is a caching mechanism that UITableView uses to avoid creating more cells than is strictly necessary. Since only a handful of cells will ever be on-screen at any given instant, there’s no reason to create any more than a handful of cell objects. As soon as a cell moves off-screen, it becomes reusable as a cell that’s about to come on-screen.
You might be thinking that this seems like overkill. Why create a separate object and two separate methods for controlling some cells in a list?
The biggest reason is performance. For a small amount of data, the benefits of using the data source pattern won’t be clear. But look at how many songs are in your iTunes library on your iDevice (you can see the total in Settings > General > About). Right now, I have 1,297 songs on my iPhone. When I tap on the Songs tab in the iPod app, only about eight rows are on-screen at a time. If every song was loaded in-memory, 1,289 of them would be wasted space because the user can’t see them.
Instead, the only pieces of information that need to be calculated are the total number of rows and the contents of the handful of rows currently on-screen. The total is an easily-cached number, since the music library on iOS is relatively static, and the first few rows are quick to look up in SQLite.[3]
Besides the memory usage issues, it may be that not all of the data is known at the time that the table view needs to be displayed. If each row has an image that needs to be downloaded from a web server, you obviously may not want to fetch all the images up front: it would be wiser to wait until that particular row is on-screen before starting the download.
Obviously, not every view in UIKit uses data sources: simple views like UIImageView just have a property that represents the data they need to display. Even a view as complex as MKMapView doesn’t use a data source: each pin on the map is added manually with addAnnotation: or in bulk with addAnnotations:.[4] The right time to reach for the data source pattern is when performance is an issue and the data lends itself well to being loaded piece-by-piece.
Besides performance, separating the presentation of the data from the mechanics of loading it is just good engineering. The view only needs to know how to display the data. Where it comes from and how much of it is loaded is outside the scope of what the user interface needs to worry about.
While writing your app, you may want to create a view that has the same constraints as a table view: it needs to potentially display a lot of data, but only a fraction of it will be needed at any given moment. Sounds like a job for a data source!
The first step is to back away from the code and head for the whiteboard. Figure out your strategy for how the view will interact with the data source; this will be different for every situation. When Kyle and I wrote the grid view in Kowabunga, called LOGridView, we decided to use a data source because we knew that at some point we would want multiple pages of icons. Our strategy was to create a simplified version of the UITableView data source. The two methods are:
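The signatures were along these lines (approximate; the real names may have differed):

```objc
- (NSUInteger)numberOfCellsInGridView:(LOGridView *)gridView;

- (LOGridViewCell *)gridView:(LOGridView *)gridView
                 cellAtIndex:(NSUInteger)index;
```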
The similarity between this and a table view data source should be obvious. The first method alone is enough for the grid view to calculate how many pages of icons will be needed: it just divides the total by twelve, rounded up.
As each page of icons comes on-screen, the grid view asks the data source for more cells. The first page asks for cells 0-11, the second pages asks for cells 12-23, etc.
After the grid view has been put on-screen and the data source is wired up, we call reloadData on the grid view to start laying out the grid. Each page in the view is an instance of LOGridViewPage, which is a private class that LOGridView uses to organize the cells. Each page has its own reloadData method, which is called on each page as it’s displayed:
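```objc
// A sketch of LOGridViewPage's reloadData; the attribute names
// are approximate, not the exact ones from Kowabunga.
- (void)reloadData {
    // Throw away whatever cells this page was showing before.
    [self.cells makeObjectsPerformSelector:@selector(removeFromSuperview)];
    [self.cells removeAllObjects];

    // Ask the data source only for the cells that belong on this page.
    for (NSUInteger index = self.startIndex; index <= self.endIndex; index++) {
        LOGridViewCell *cell = [self.dataSource gridView:self.gridView
                                             cellAtIndex:index];
        [self.cells addObject:cell];
        [self addSubview:cell];
    }
}
```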
The heart of the method is the loop, where we load only the cells we need for this particular page. Each page has attributes for the grid view, the data source, and the start and end indices, so it has enough information to build up its grid of icons.
Your view will need a similar method, although the way you determine what to display and how you ask for it will be different. Maybe instead of discrete pages, like Kowabunga, your app has a more fluid layout like UITableView. In that case, you may need to calculate the start and end indices based on what’s on-screen.
In general, you’ll need to understand both how the data is retrieved and how it’s displayed before you can determine the right way to coordinate the two.
That’s it for this week. Leave a comment below if I glossed over any details or made any mistakes (as if that ever happens…)
Let me know what you want to read about next week! I might change gears and talk about a non-Cocoa topic, like Django or Coffeescript. It’s up to you!
[1] If you need to have more than one section, the numberOfSectionsInTableView: method will let you control how many are displayed. The tableView:numberOfRowsInSection: method will be called once for each section.
[2] The section and row methods actually aren’t in the default implementation of NSIndexPath: they’re added by a UIKit category. These two helper methods are just wrappers around the indexAtPosition: method.
[3] That’s assuming, of course, that the library data is in a SQLite database. It very well may not be, I don’t know. The data in your own apps probably will be, since that’s one of the storage backends Core Data uses.
[4] According to Apple’s documentation, you should add all the annotations at once, even if they’re not on-screen. As the user moves the map around, the map view will notify the delegate and ask it to create the pin views as-needed. So you could consider this a hybrid model of delegation and using a data source.
To illustrate how to implement delegates, I’ll talk briefly about both the Core Location and Address Book frameworks that are included in the iOS SDK.
A delegate is a helper object that can react to or control events happening in another object.
For example, a UITableView object notifies its delegate whenever the user taps on a row in the table, asks its delegate what height to use for each row, and informs the delegate when cells are edited.
Unlike the target-action pattern, the delegate doesn’t get to choose its own method names: they’re defined by the class that uses it. Often the delegate is expected to implement more than one method in order to have the most control over the other object. A common pattern is to have one method for successful completion of a task, and another method for failure.
APIs that rely on hardware or the network often use delegation as way to provide asynchronous responses. When using the Core Location framework, your code may start a request for GPS data, but the chip needs a while to warm up and connect to the satellites. Instead of blocking or polling the hardware, your code provides delegate methods that will be notified when the location data is ready to be used.
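Here’s a minimal sketch of that arrangement; the class name and log output are my own inventions, but the framework calls are the ones discussed below:

#import <UIKit/UIKit.h>
#import <CoreLocation/CoreLocation.h>

// Conforming to CLLocationManagerDelegate lets the framework call us back.
@interface WhereAmIViewController : UIViewController <CLLocationManagerDelegate> {
    CLLocationManager *locationManager;
}
@end

@implementation WhereAmIViewController

- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];
    locationManager = [[CLLocationManager alloc] init];
    locationManager.delegate = self;
    // Returns immediately; the GPS warms up in the background.
    [locationManager startUpdatingLocation];
}

// Called at some point later, whenever the framework has a new fix.
- (void)locationManager:(CLLocationManager *)manager
    didUpdateToLocation:(CLLocation *)newLocation
           fromLocation:(CLLocation *)oldLocation {
    NSLog(@"Now at %f, %f",
          newLocation.coordinate.latitude,
          newLocation.coordinate.longitude);
}

@end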
The viewDidAppear: method starts up the location framework, which abstracts away the details of the GPS (or cell tower triangulation). The startUpdatingLocation method returns immediately, so your app’s main thread won’t be blocked.
At some point in the future, after the framework has determined the user’s location, the locationManager:didUpdateToLocation:fromLocation: method will be called.2 As the user moves around, the location manager will continue to call this method on the delegate until you ask it to stop updating the location.
When working with delegates, you’ll hear a lot about Objective-C protocols. In a nutshell, a protocol is a lot like an interface in other object-oriented languages. Some methods may be required, others may be marked as optional.
To declare that your class conforms to a particular protocol, you put the name of the protocol in angle brackets after the superclass name.
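The location sketch above already did exactly this:

@interface WhereAmIViewController : UIViewController <CLLocationManagerDelegate>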
If your app needs to interact with the user’s contact list, you can access the data programmatically using the Address Book and Address Book UI frameworks. The first is designed to give your app access to the underlying contact data. The second is a set of pre-built views and interface elements for displaying, editing, and choosing contacts.
When displaying the Address Book UI views, your code participates by setting itself as a delegate of the Apple-provided views. As the user makes their selections, your delegate will be notified and have the opportunity to affect the workflow.
To prompt the user to choose a property (like a phone number) for a contact, you have to create an ABPeoplePickerNavigationController and give it a delegate. In Photo Dialer’s case, that delegate is an AddContactDelegate.
Here’s roughly what one of the view controllers does to present the UI.
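A sketch, with addContactDelegate standing in for however the controller holds its AddContactDelegate; the picker API itself comes from AddressBookUI:

// (needs AddressBookUI.framework and #import <AddressBookUI/AddressBookUI.h>)
ABPeoplePickerNavigationController *picker =
    [[ABPeoplePickerNavigationController alloc] init];
picker.peoplePickerDelegate = addContactDelegate;
// Only offer phone numbers when the user drills into a contact.
picker.displayedProperties =
    [NSArray arrayWithObject:[NSNumber numberWithInt:kABPersonPhoneProperty]];
[self presentModalViewController:picker animated:YES];
[picker release];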
The actual display of the user’s contacts, and managing the stack of views involved, is handled entirely by Apple’s code. The only time Photo Dialer has to worry about it is in the delegate methods. As the user selects a contact, selects a phone number, or presses “cancel”, the delegate is notified.
From some of these methods, the delegate can return a value that stops the user from drilling down further. That doesn’t dismiss the view, however: you still have to manually remove it from the screen.
The specifics of what all the Address Book objects represent aren’t important, except that an ABRecordRef represents a particular person, and the ABMultiValueIdentifier specifies which of potentially many phone numbers the user tapped on.
By wrapping these views in a reusable class and providing a mechanism for a delegate to participate, Apple has allowed us to remove a lot of code that we would normally have to write ourselves.
When designing your own objects, take a minute to consider if the app-specific features could be implemented by a delegate, leaving reusable code in the original class. For example, if your app uses WebSockets to connect to a live stream of data, split the WebSocket-specifics into a generic class that delegates to an app-specific class. You might find that with just a few delegate methods, most of the code can be reused in another app without changes: just give it a different delegate.
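A sketch of that split; every name here is invented for illustration:

// The generic half: knows WebSockets, knows nothing about the app.
@protocol StreamClientDelegate <NSObject>
- (void)streamClient:(id)client didReceiveMessage:(NSString *)message;
@optional
- (void)streamClient:(id)client didFailWithError:(NSError *)error;
@end

@interface StreamClient : NSObject {
    id<StreamClientDelegate> delegate; // not retained, as usual for delegates
}
// All the WebSocket plumbing lives here; app-specific behavior
// lives in whatever object is set as the delegate.
- (void)connectToURL:(NSURL *)url;
@end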
When using a delegate from your class, keep a few tips and tricks in mind. The big one: use respondsToSelector: to make sure the delegate supports the method you’re about to call, since optional protocol methods may not be implemented.
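Using the hypothetical StreamClient delegate from the sketch above:

// Inside StreamClient, when something goes wrong. didFailWithError: is
// marked @optional, so check before calling it.
if ([delegate respondsToSelector:@selector(streamClient:didFailWithError:)]) {
    [delegate streamClient:self didFailWithError:error];
}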
What do you want to read for next Friday’s article? I’m thinking either data sources, or diving into something more advanced, like working with REST APIs or Bonjour networking. Cast your vote for next week’s topic in the comments!
Table views in particular have two helper objects: the delegate and the data source. The data source is what the table view uses to determine what information to display, while the delegate controls almost every other aspect of the view. ↩
Not only could there be several seconds between the two method calls above, but iOS will ask the user for their permission before revealing their GPS location. If the user denies you, the second method may never be called at all! ↩
A selector is essentially the name of an Objective-C method, realized as an object in code. It has the type SEL, which is a primitive (no memory management needed). The actual contents of the selector are opaque, but you can get one from a method name with the @selector() syntax.
You can also get a selector from an NSString. This is handy if you want to store the selector in a config file.
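Both forms side by side, using a method name we’ll see again below:

SEL tapAction = @selector(doneButtonHit:);                 // resolved at compile time
SEL sameAction = NSSelectorFromString(@"doneButtonHit:");  // resolved at runtime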
Once you have the selector, there are a few things you can do with it.
You can ask an object if it responds to that selector; this is like asking if the object implements this method. If you’ve ever done reflection in Java, prepare for a breath of fresh air!
You can ask an object to perform that selector:
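(someObject and sayHello are stand-ins here.)

SEL greeting = @selector(sayHello);
if ([someObject respondsToSelector:greeting]) {
    [someObject performSelector:greeting]; // same as [someObject sayHello]
}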
This is the same as calling the method directly, but because the selector could come from a variable, it’s possible to change the method at runtime.
For example, UIBarButtonItem uses a target and action to call your code when the button is tapped.
You might have some code in your view controller like this:
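(A sketch: I’ve used the system Done item, and doneButtonHit: matches the console example below.)

- (void)viewDidLoad {
    [super viewDidLoad];
    UIBarButtonItem *doneButton = [[UIBarButtonItem alloc]
        initWithBarButtonSystemItem:UIBarButtonSystemItemDone
                             target:self
                             action:@selector(doneButtonHit:)];
    self.navigationItem.rightBarButtonItem = doneButton;
    [doneButton release];
}

- (void)doneButtonHit:(id)sender {
    NSLog(@"The done button was tapped!");
}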
When you run the app and tap on the button, you’ll see “The done button was tapped!” in the console.
It’s as if the button has a line of code like [controller doneButtonHit:self], but obviously it doesn’t: the button is a generic object that you can use off-the-shelf. The secret sauce is performSelector!
This pattern, called “target-action”, is used throughout Cocoa, especially in user interface code. It allows the UI widgets to stay generic, while making it easy to integrate your custom controller without needing to subclass anything.
Let’s write a really simple object that implements the target-action pattern. We’ll call it a button, but we’ll skip writing any view-related code. For simplicity, let’s assume that when the user taps on the button, the button will receive the tap message.
Here’s our interface.
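Sketched below; the class name FakeButton is arbitrary.

@interface FakeButton : NSObject {
    id target;   // deliberately not retained; see below
    SEL action;
}
- (id)initWithTarget:(id)aTarget action:(SEL)anAction;
- (void)tap;
@end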
And here’s the implementation.
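Again as a sketch:

@implementation FakeButton

- (id)initWithTarget:(id)aTarget action:(SEL)anAction {
    self = [super init];
    if (self) {
        // Vanilla setup: store the target and action for later.
        target = aTarget;
        action = anAction;
    }
    return self;
}

- (void)tap {
    // Ask the target to perform the action, passing the button itself along.
    [target performSelector:action withObject:self];
}

@end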
The init method is simple, just some vanilla setup. We store the target and action for later. Notice that we don’t retain the target.
In the tap method, we ask the target to perform the action. We also pass the button as an argument to the action. There are a bunch of variations on the basic performSelector: method to help with things like passing arguments or delaying before performing: check Apple’s documentation for all of them.
The controller wired up to this button might look like this:
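(The action method name here is mine.)

- (void)viewDidLoad {
    [super viewDidLoad];
    // 'button' would be an instance variable on the controller.
    button = [[FakeButton alloc] initWithTarget:self
                                         action:@selector(fakeButtonTapped:)];
}

- (void)fakeButtonTapped:(FakeButton *)sender {
    NSLog(@"The fake button was tapped!");
}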
If you look back at the example with UIBarButtonItem, you’ll see that they’re almost identical!
Mimicking Apple’s code is easier than it sounds, right? :)
The best time to reach for this pattern is when the generic object has exactly one simple event to report, and doesn’t need any information back from your code.
If the generic object has more than one action to perform, or needs to collect information from your custom code, your problem is probably better solved with the delegate pattern or the data source pattern. I’ll cover those in later articles.