By Casey Liss
 

This week, I had the pleasure of joining my pals Dan Moren and Mikah Sargent, along with fellow guest Heather Kelly, on Clockwise. Always a blast, and always 30 minutes or less, Clockwise is a whirlwind, and a ton of fun.

For this episode, we discussed our favorite iPhones, how we protect our privacy online, sleep tracking, and where we enjoy working outside the house. For members, we also discussed our notetaking workflow.

I enjoy Clockwise immensely, even if it is a bit stressful — everything moves so fast! If you haven’t given it a try, you absolutely should.


In November, I purchased a brand-new MacBook Pro, complete with Apple’s fancy new M1 Max processor. This is, without reservation, the best computer I’ve ever owned. It is faster than my beloved iMac Pro, but considerably more portable.

At the time, I was working on what would eventually become MaskerAid. Very quickly upon getting to work on my new computer, I realized that things weren’t working properly on this new machine. After some research, it appeared that some aspects of the Vision Framework were not available on Apple Silicon based Macs.

Apple’s mechanism for developers to provide feedback is the aptly-named Feedback Assistant (née Radar). It is a full-blown app on macOS/iOS/iPadOS. In fact, if you happen to be on an Apple device, try this link. Radar was a black hole, where issues went to die (or, more often, get marked as duplicates). Feedback Assistant, despite trying to pull the “Xfinity trick”, seems to be the same as it ever was.

Regardless, I filed a Radar… er, a Feedback — Apple people, if you happen to read this, it’s FB9738098. When I filed it, back on 3 November 2021, I even included a super-simple sample project to demonstrate the issue.


Apple’s feedback system is fundamentally broken — at least, for everyone who does not work at Apple.

In the roughly 225 days since I filed that feedback, I received precisely zero… well… feedback. Apple is a big company, and surely gets an unimaginable amount of feedback filed every single day. However, I have zero indication that a human has looked at my bug. To me, it went into the black hole, never to return again.

Thankfully, by virtue of my day job, I’ve had the occasion to make the acquaintance of quite a few Apple engineers. I reached out to someone who, let’s just say, should have insight into how to fix my problem. They were very helpful, and very apologetic, but their response was, in my words, “tough shit”.

Sigh.


Fast forward to early this month, and it’s WWDC. One of the best not-so-secret secrets about WWDC is that the labs are where it’s at. You can, from the comfort of your own home, spend ~30 minutes with an Apple engineer who is likely to be intimately familiar with the APIs you’re working with. So, I signed up for a lab to beg for someone to fix my bug.

I didn’t expect much to come of this lab, and I started by telling the engineer I spoke with that I expected it to take just a couple minutes. As I told them, I was just there to beg for them to fix my bug.

The engineer’s response?

“Well, I do think this is only going to be a couple minutes, but it’s better than you think: I have an easy workaround for you!”

🎉


In short, when you make a request of the Vision Framework in the Simulator on an Apple Silicon Mac, it fails every time.

Sample code
import UIKit
import Vision

// Note: in the app, this method lives inside a helper type;
// FaceDetectionErrors is a small custom error enum defined elsewhere.

/// Asynchronously detects the faces within an image.
/// - Parameter image: Image to detect faces within
/// - Returns: Array of rects that contain faces.
///
/// - Note: The rects that are returned are percentages
///         relative to the source image. For example:
///         `(0.6651394367218018,`
///         `0.527057409286499,`
///         `0.0977390706539154,`
///         `0.1303187608718872)`
static func detectFaces(in image: UIImage) async throws -> [CGRect] {
    typealias RectanglesContinuation = CheckedContinuation<[CGRect], Error>

    return try await withCheckedThrowingContinuation { (continuation: RectanglesContinuation) in
        guard let cgImage = image.cgImage else {
            print("WARNING: Couldn't get CGImage")
            continuation.resume(throwing: FaceDetectionErrors.couldNotGetCgImageError)
            return
        }

        var retVal: [CGRect] = []
        let request = VNDetectFaceRectanglesRequest { request, error in
            if let error = error {
                print("WARNING: Got an error: \(error)")
                // Deliberately not resuming here: when the request fails,
                // perform(_:) below also throws, and the catch block resumes
                // the continuation instead. Resuming twice would trap.
                return
            }

            if let results = request.results as? [VNFaceObservation] {
                retVal.append(contentsOf: results.map(\.boundingBox))
            } else {
                print("WARNING: Results unavailable.")
            }

            continuation.resume(returning: retVal)
        }

        // This assumes a small extension that converts UIImage.Orientation
        // to CGImagePropertyOrientation; the SDK doesn't provide one.
        let handler = VNImageRequestHandler(cgImage: cgImage,
                                            orientation: CGImagePropertyOrientation(image.imageOrientation),
                                            options: [:])

        do {
            try handler.perform([request])
        } catch {
            print("ERROR: Request failed: \(error)")
            continuation.resume(throwing: error)
        }
    }
}

The error you receive is as follows:

Request failed: Error Domain=com.apple.vis Code=9 "Could not create inference context" UserInfo={NSLocalizedDescription=Could not create inference context}

Back in my lab, I asked the engineer what they were talking about. As it turns out, I simply needed to add one line, against my instance of VNDetectFaceRectanglesRequest:

request.usesCPUOnly = true

That’s it.

Apparently this forces the Vision Framework to use the CPU rather than the GPU for its computations. That would be pretty crummy on a real device, but it’s no problem when you’re just trying things in the Simulator.
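
Since the failure only bites in the Simulator, one reasonable way to apply the workaround (my framing, not something prescribed in the lab) is to gate it behind a conditional compilation check, leaving real devices on the GPU. A minimal sketch:

import Vision

/// Builds the face-rectangles request, applying the workaround only where
/// it's needed. (A hypothetical helper, not MaskerAid's actual code.)
func makeFaceRectanglesRequest(completionHandler: @escaping VNRequestCompletionHandler) -> VNDetectFaceRectanglesRequest {
    let request = VNDetectFaceRectanglesRequest(completionHandler: completionHandler)
    #if targetEnvironment(simulator)
    // The Simulator on Apple Silicon can't create a GPU inference context,
    // so force the CPU there; real devices are unaffected.
    request.usesCPUOnly = true
    #endif
    return request
}

On a device, the check compiles away entirely, so the GPU path is untouched.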

Having my problem worked around, in the span of five minutes, with a single-line code change is both delightful and incredibly frustrating.


I got to thinking about this lab again this morning, and I’m pretty upset by it. Ultimately, I got what I wanted, but why couldn’t I have had that OVER TWO HUNDRED DAYS AGO‽ It’s infuriating.

Furthermore, as a parting shot, the engineer asked me if I had ever bothered trying to talk to someone by using one of my Technical Support Incidents. The engineer meant it in good faith — they were trying to say that I didn’t have to wait from November → June to get an answer. But in a way, I almost find this more frustrating still.

Why is this the accepted way to get the attention of an engineer? For something as simple as a one-line code change, why are my only two options:

  • Wait for June and hope I get an audience with the right engineer at a lab
  • Use one of my two Technical Support Incidents and hope it’s fruitful… and that I don’t need that one for something else later in the year

Were my problem put on the desk of the right engineer, who was incentivized to provide useful and actionable feedback, it could have been worked around in just a few minutes. I just needed a reply to my feedback with the one-liner.

Unfortunately, Feedback Assistant and Radar are tools for Apple, and they serve Apple’s needs and only Apple’s. They are a complete waste of time for outside developers. I maintain that they are a black hole into which I pour time, effort, sample code, and [often useless] sysdiagnoses. I get nothing in return.

Apple swears up and down that Feedbacks are useful. I’ve been told many times, by many teams, that they also use Feedbacks as a de facto voting mechanism to try to get a pulse on what external developers want. I’ll leave it as an exercise for the reader to think about how utterly broken that is.

Instead, let me make it clear what developers want: actual, human responses to the feedback we file.

Let’s start there, if you please.


 

Over the weekend, I appeared on John Gruber’s seminal podcast, The Talk Show. On the episode, we kick things off by deliberating the right way to make a martini. Afterwards, we spend some time discussing the complete lack of decent Apple monitors on the market for the last half-decade. Finally, we round out the chat talking a bit about the genesis of, and reception to, MaskerAid.

I had a ton of fun on this one, and it meant a lot to me.


MaskerAid Follow-Up

Yesterday I launched my new app, MaskerAid. It’s too early to tell how the response has been in terms of numbers. In terms of sentiment, however, the response has been great!

If you’re still holding out on trying MaskerAid — which is free to try! — you may wish to check out what these fine folks had to say about it:

MaskerAid also seems to have found itself a ton of use cases, other than simply hiding your own children’s faces. Some of these I never expected, and all of them are very clever:

  • Teachers may wish to share shots from within their classroom, but don’t necessarily want to fuss with determining which students have social media releases on file.
  • Foster parents aren’t legally allowed to post photos of the children they’re fostering, despite those children, I would imagine, often (always?) feeling like members of the family.
  • One may find that the profile photo they want to use for dating apps happens to be a group shot. By putting emoji on the other faces, it’s clear who is the one looking for love.
  • Protestors are, sometimes without hyperbole, taking their lives in their hands by standing up for what is right. MaskerAid can be a useful tool to keep their identities private.
  • The same is true of soldiers.
    🇺🇦 I stand with the people of Ukraine. 🇺🇦
  • On a more fun note, MaskerAid is an excellent way to obscure faces in amusement parks, or even better, on rides themselves.
  • If a boudoir photographer wanted to share a photo that’s perhaps just a bit too risqué, MaskerAid can be used to tastefully (or humorously!) cover that which should not be shown.

If you haven’t given it a whirl yet, I’d love for you to give MaskerAid a try.


 

Today, I’m overjoyed to announce my latest app, MaskerAid!

My kids, with emoji in front of their faces.

In short, MaskerAid allows you to quickly and easily add emoji to images. Plus, thanks to the magic of ✨ machine learning ✨, MaskerAid will automatically place emoji over any faces it detects. There are several reasons you may want to hide a face:

  • The face of a child who is too young to consent to their image being shared
  • The faces of the children in your classroom, or your own classmates, who really don’t need to be in your images
  • The faces of protestors who are standing up against a grotesque war
  • The other faces in a particularly great shot of you that happened to be taken as part of a group

There are other reasons you may want to simply add an emoji to an image, but not on top of a face:

  • Perhaps you want to point ⬆️👆⬇️👇⬅️➡️👈👉 to something
  • Let’s just say 🍑 + 💨 = 😆
  • Who doesn’t love a ✌️ behind a head?

MaskerAid is free to try, but you may only add 🙂 to images. There is a one-time $3 in-app purchase to unlock the rest of the emoji.

MaskerAid app icon

MaskerAid is designed to be a very particular kind of app: do one thing, do it well, and do it quickly.

For me, I really really wanted an app that would let me quickly hide the faces of my children, so I could post family pictures to the internet, but keep their faces private. In much the same way Peek‑a‑View was written to scratch my own itch, so was MaskerAid.

Animated GIF of MaskerAid in use

MaskerAid is free to try, and I’d be honored if you would. If you like it, buy the in-app purchase, and more than anything else, tell your friends!

The Back Story

When my oldest child, Declan, was a baby, we posted pictures of him frequently. Not only were we new parents, but we were first-time parents, and we had just finished a nasty journey. I like to think we earned it.

However, when Declan got to be around four, it occurred to me — much to my dismay — that he was no longer a little squish. He was an honest-to-goodness person, with a personality, desires, and opinions. Which got me to thinking: what if he doesn’t want me posting pictures of him to my social media? Today, he certainly doesn’t care, but what about tomorrow? What about when he’s in high school?

I mostly stopped posting pictures of him, except on his birthday. When I did post, I would generally hide his face with an emoji, like so:

This isn’t awful to do on Instagram, but it’s not exactly easy. The best way I had found to do it was to make an Instagram Story, save it, and then use that as your image for your post. It’s a pain.

What I wanted was an app that would let me do a couple things:

  1. Add an arbitrary emoji to an image
  2. Place emoji over the faces within an image automatically

MaskerAid's drawer allowing you to choose an emoji

I figured I could conquer #1, but #2 seemed harder. I know very little about machine learning, but I know enough to know that training a model to recognize faces would be exceedingly difficult.

Then I had an apostrophe… er, an epiphany.

Apple has already done the work for me.

I started to make a proof-of-concept using UIKit. I just wanted to be able to put rectangles around the faces detected in a photo. I quickly hit several walls, most of which were probably my fault, but it seemed silly to spin my wheels. So I gave the same task a quick college try in SwiftUI, and it took no time.

I was off to the races. Within the first day, I had the bare-bones proof of concept complete.
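
For the curious, the heart of that proof of concept is not much code. Here is a rough sketch of the idea, using the normalized rects that a helper like detectFaces(in:) from earlier returns; the view itself is hypothetical, not MaskerAid's actual implementation:

import SwiftUI
import UIKit

/// A rough sketch: draw red outlines over the normalized face rects
/// returned by something like detectFaces(in:).
struct FaceBoxesView: View {
    let image: UIImage
    let faceRects: [CGRect]   // normalized 0...1, Vision-style bottom-left origin

    var body: some View {
        Image(uiImage: image)
            .resizable()
            .scaledToFit()
            .overlay(
                GeometryReader { geo in
                    ForEach(faceRects.indices, id: \.self) { index in
                        // Scale the normalized rect to the view, flipping the
                        // y-axis (Vision's origin is bottom-left, SwiftUI's is
                        // top-left).
                        Rectangle()
                            .stroke(Color.red, lineWidth: 2)
                            .frame(width: faceRects[index].width * geo.size.width,
                                   height: faceRects[index].height * geo.size.height)
                            .position(x: faceRects[index].midX * geo.size.width,
                                      y: (1 - faceRects[index].midY) * geo.size.height)
                    }
                }
            )
    }
}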

But Wait, There’s More

As with all my other apps, I did a private beta test for a handful of trusted friends and some press. Even though the beta only went to about forty people, those testers gave me immensely useful feedback. Myke, in particular, pointed out to me something I should have seen but didn’t. MaskerAid is excellent for adding an emoji anywhere, to any image. Even images without faces can often find themselves in need of an emoji; perhaps for fun, perhaps to hide something that isn’t a face.

Once Myke called this to my attention, I noticed other testers doing the same thing. Suddenly I realized I had a whole new class of user to consider. Ultimately, this new use case didn’t dramatically change any of my plans for MaskerAid, but it is a testament to two things:

  1. Always show your work to trusted advisors
  2. You never know how people will want to use your software

Myke also made a second great suggestion. Instead of marketing around a [sort-of] negative of hiding things, why not lean into the fact that MaskerAid can be used to add emoji anywhere? Annoyingly, #mykewasright.

The Work

Two lovebirds

Unfortunately, a bare-bones app is not appropriate for sale in the App Store. It took me from late September until late February to get MaskerAid to the point that I felt like it was ready to be released. I’m sure others would work faster, but there’s a surprising amount of work that goes into making a modern iOS app these days.

Though MaskerAid probably doesn’t look like it took nearly half a year, I assure you I was not just mucking about during that time. Further, this isn’t my first rodeo: not only have I been working on iOS professionally since 2016, but this is also the fourth app I’ve released independently.

It’s surprising to me how much time I spent working on what is kinda “administrivia” — things like the in-app purchase flow, making sure the handful of user preferences I keep are saved properly, and updating emoji without having to update the app. (We can all learn from Slack’s mistakes, amirite?)

I don’t say this to complain — by and large the app has been tremendous fun to work on — but more to point out that even “simple” apps have quite a lot going on under the hood.

Other Factoids

For the nerds, here are some tidbits you may find interesting. MaskerAid:

  • Uses async/await semi-liberally
  • Uses Combine occasionally
  • Is almost exclusively SwiftUI
  • Is exclusively Swift
  • Leverages Gui Rambo’s excellent tip about storing app information in iCloud; this is how I can update emoji without updating the app
  • Had its first commit on 21 September, 2021
  • Had its last commit for version 2022.2 — the one released today — on Friday morning; it was the 203rd
  • Has 29 closed pull requests (from me to me 🤪)
  • Has 64 closed GitHub issues and 12 open ones, as of writing

Some Acknowledgements

Though I wrote every line of code in MaskerAid, I definitely had some help along the way that I haven’t mentioned yet:

  • Ste Grainer provided yet another wonderful icon for me — I’ve relied on Ste for both the Vignette and Peek‑a‑View icons before. However, more critically, the name MaskerAid was Ste’s idea, and I knew immediately that it was the right one for the app.

  • Spencer Wohlers provided many, many useful and actionable bug reports during beta testing.

  • Mark Jeshcke provided nearly as many bug reports, but even more critically, lent his far superior design eye to the app. Thanks to Mark’s ideas and tips, MaskerAid was shaped into something quite a bit more attractive than I could or would have made alone.

  • More than anyone else, my family, for inspiring the app, being patient with me while I worked on it, and just generally being more awesome than I can ever hope to be. 🥰

It’s scary to put something new into the world, but I’m so happy to be able to let MaskerAid out into your hands. I really hope you’ll try it.


I’m working on something new, and as part of that app, I want to be able to save an image. There were a couple gotchas with that:

  1. At first, I wouldn’t get a preview of the image in the share sheet; the user would instead be presented with the app’s icon, which is not helpful.
  2. I also wouldn’t get the option to Save Image, as in, save it to the user’s photo library.

For reference for others today, or me in the future, there are simple fixes to both of these problems.

Seeing a preview image

In order to see a thumbnail — and the file type and size as a subtitle — you cannot pass a UIImage as an activityItem to your UIActivityViewController. Instead, save the image to the local filesystem, and then pass the resulting file URL as your activityItem.
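
Here is a minimal sketch of that approach, assuming a JPEG suits your needs; the function name and file name are placeholders of mine, not anything the API prescribes:

import UIKit

/// Writes the image to a temporary JPEG file and returns a share sheet
/// configured with the resulting file URL (rather than the UIImage itself).
func makeShareSheet(for image: UIImage) throws -> UIActivityViewController {
    guard let data = image.jpegData(compressionQuality: 0.9) else {
        throw CocoaError(.fileWriteUnknown)
    }

    let url = FileManager.default.temporaryDirectory
        .appendingPathComponent("SharedImage.jpeg")
    try data.write(to: url)

    // Passing a file URL (not a UIImage) is what gets you the thumbnail,
    // file type, and size in the share sheet's header.
    return UIActivityViewController(activityItems: [url], applicationActivities: nil)
}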

That results in something that looks like this:

Top of a ShareSheet showing the thumbnail, app name, and "JPEG Image • 385 KB"

Note the app name and thumbnail have been obscured deliberately after the screenshot was taken.

Saving to the Photo Library

By default, the ShareSheet does not show the option of saving an image to the user’s photo library. Once you think about it a little bit, it makes sense why that’s the case, but for the life of me, I couldn’t figure out what I needed to do differently.

As it turns out, enabling it was a no-code change. I simply needed to add the NSPhotoLibraryAddUsageDescription key to my Info.plist, which is represented as Privacy - Photo Library Additions Usage Description in the Xcode UI.

Once that key was added, iOS picked it up automatically, and I suddenly had a new entry in my ShareSheet:

The options on a ShareSheet, the second of which is "Save Image"

Both of these were simple fixes, but it took me forever to determine what they were.


PSA: Apple Silicon Users: Update ffmpeg

TLDR: If you run a Mac using Apple Silicon, update ffmpeg to dramatically speed up your encodes.

Late last year I traded in my beloved iMac Pro for an iMac Pro Portable… er, a 14" MacBook Pro. I cannot overstate how much I love this machine, and when paired with the LG UltraFine 5K, it is actually a really phenomenal setup. I have nearly all the benefits of my beloved iMac Pro, but I can pick it up and move it without a ridiculous carrying case.

When I got the machine, one of the first things I tried, for speed-testing purposes, was an ffmpeg encode. As has been mentioned before, I use ffmpeg constantly, either directly, or via Don Melton’s amazing other-transcode tool.

Given this was my first Apple Silicon Mac, and I sprung for the M1 Max, I was super excited to see how fast transcodes were going to be on this new hotness.

I was sorely disappointed. It seemed that encodes were capped at a mere 2× — about 60fps.

As it turns out, I wasn’t the only one giving this some serious 🤨. I was pointed to an issue in the aforementioned other-transcode repository. Many other people thought this looked really weird.

This was first reported in early November, and then about two months ago, the also-excellent HandBrake found a fix, which seemed to be really simple — a very special boolean needed to be set.

Thankfully, about a month ago, ffmpeg was patched as well. The fix was eventually integrated into ffmpeg version 5.0, which was released on 14 January.

However, I install most things using Homebrew, and the Homebrew formula hadn’t been updated. Using a neat trick that Homebrew supports, I was able to grab and build the latest (read: HEAD) version of ffmpeg and get fast encodes. However, if you’re not inclined to deal with stuff that fiddly, as of yesterday, the ffmpeg formula has been updated.

So, if you do any transcoding using ffmpeg on your Apple Silicon Mac, now is the time to do a brew upgrade.

Before the new ffmpeg goodies, I topped out encodes at about 2×. Now, using the latest-and-greatest released version of ffmpeg, I am getting quite a bit more than that. On a test mpeg2video file that I recorded using Channels, I was able to get a full 10×. 🎉


 

This week I joined my pals Ben Chapman and “Doctor Don” Schaffner on their fascinating podcast Food Safety Talk. I know; I am surprised as well.

Nevertheless, on this episode, our conversation is wide-ranging and quite entertaining. We begin with Ben playing 20 questions, flailing about, as he tries to figure out who the special guest is. 😆 After that, we discuss my tastes in food, and how close I am to, well, accidentally poisoning myself.

The conversation is kind of all over the place, and frankly, those are some of the most fun times I have as a guest. Even if you’re not interested in Food Safety Talk, Ben and Don also host Risky or Not, which is a short podcast evaluating the really poor choices of their audience. It’s both quite a fun listen and also mildly horrifying.


 

From the this-may-only-be-useful-to-me department, I recently did the stereotypical programmer thing: I put off what I should have been doing by automating something that bothered me instead.

One of the many perks of SwiftUI is how easy it is to preview your designs/layouts. In fact, you can even do so across multiple devices:

import SwiftUI

struct SomeView: View {
    var body: some View {
        Text("Hello, world")
    }
}

struct SomeViewPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            SomeView()
                .previewDevice(PreviewDevice(rawValue: "iPhone 13 Pro"))
            SomeView()
                .previewDevice(PreviewDevice(rawValue: "iPhone SE (2nd generation)"))
        }
    }
}

The above code would present you with two renders of SomeView: one shown on an iPhone 13 Pro, and one on an iPhone SE.

The problem with this, however, is you need to know the exact right incantation of device name in order to please Xcode/SwiftUI. For some devices, like iPhone 13 Pro, that’s pretty straightforward. For others, like iPhone SE (2nd generation), it’s less so.

The good news is, you can get a list of installed simulators on your machine using this command:

xcrun simctl list devices available

It occurred to me: if I can easily query Xcode for the list of installed simulators, surely I can convert that list into a Swift enum or equivalent that I can use from my code? Hell, I can even auto-generate this enum every time I build, to make sure I always have the latest-and-greatest list for my particular machine available.
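
To give a flavor of the idea, here is a rough sketch of what that generation step could look like; this is my simplification, not the actual installed-simulators implementation, and the function name is mine:

import Foundation

/// Shells out to simctl, pulls the device names out of the output, and
/// writes a Simulators.swift full of PreviewDevice constants.
func generateSimulatorsFile() throws {
    let process = Process()
    process.executableURL = URL(fileURLWithPath: "/usr/bin/xcrun")
    process.arguments = ["simctl", "list", "devices", "available"]

    let pipe = Pipe()
    process.standardOutput = pipe
    try process.run()
    process.waitUntilExit()

    let output = String(decoding: pipe.fileHandleForReading.readDataToEndOfFile(),
                        as: UTF8.self)

    let names = output
        .split(separator: "\n")
        .map { $0.trimmingCharacters(in: .whitespaces) }
        .compactMap { line -> String? in
            // Device lines look like "iPhone 13 Pro (UUID) (Shutdown)"; strip
            // everything from the UUID onward to get the display name. Header
            // lines ("== Devices ==", "-- iOS 15.4 --") have no UUID, so
            // they're dropped here.
            guard let range = line.range(of: #" \([0-9A-Fa-f-]{36}\)"#,
                                         options: .regularExpression) else { return nil }
            return String(line[..<range.lowerBound])
        }

    var file = "import SwiftUI\n\nenum Simulator {\n"
    for name in Set(names).sorted() {
        // "iPhone SE (2nd generation)" becomes "iPhoneSE2ndgeneration", etc.
        let identifier = name.filter { $0.isLetter || $0.isNumber }
        file += "    static let \(identifier) = PreviewDevice(rawValue: \"\(name)\")\n"
    }
    file += "}\n"

    try file.write(toFile: "Simulators.swift", atomically: true, encoding: .utf8)
}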

Enter installed-simulators. It’s a small Swift command-line app that does exactly that. When run without any parameters, it spits out a file called Simulators.swift. That file looks like this:

import SwiftUI

enum Simulator {
    static let iPhone8 = PreviewDevice(rawValue: "iPhone 8")
    static let iPhone8Plus = PreviewDevice(rawValue: "iPhone 8 Plus")
    /* ...and so on, and so on... */
}

That makes it super easy to test your SwiftUI views by device, without having to worry about the precisely correct name of the device you’re thinking of:

struct SomeViewPreviews: PreviewProvider {
    static var previews: some View {
        Group {
            SomeView()
                .previewDevice(Simulator.iPhone13Pro)
            SomeView()
                .previewDevice(Simulator.iPhoneSE2ndgeneration)
        }
    }
}

Naturally, I prefer this over the alternative.

Since I’m so used to wielding a hammer, I wrote this as a Swift command-line app rather than a Perl script. Sorry, John. Also, I know effectively nothing about releasing apps of any sort for macOS, so goodness knows if this will work on anyone else’s desk but mine.

Nevertheless, I’ve open-sourced it, and you can find it — as well as some more robust instructions — over at GitHub.


 

In developing for Apple platforms — particularly iOS — there are many arguments that are disputed with the same fervor as religion or politics. Storyboards are evil, or they’re the only way to write user interfaces. AutoLayout is evil, or it’s the only reasonable way to write modern UI code. SwiftUI is ready for production, or it’s merely a new/shiny distraction from real code. All Swift files should be linted, or only overbearing technical leads bother with linting.

Today, I’d like to dip my toe into the pool by discussing linting. Linters are tools that look at your source code and ensure that very obvious errors are not made, and that a style guide is being followed. As a silly example, both of these pieces of Swift code are valid:

struct Person {
    var id: Int? = nil
}

struct Person {
    var id: Int?
}

A linter would have an opinion about the above. It may encourage you to use the bottom version — var id: Int? — because the explicit initialization of nil is redundant. By default, an Optional will carry the default value of nil, implicitly.

SwiftLint

In my experience, the first time I really ran into a linter was when I started doing Swift development full-time in 2018. The team I was on dabbled lightly in using SwiftLint, the de facto standard linter for Swift projects. The tough thing about SwiftLint is that it has a lot of rules available — over 200 as I write this. Many of those rules are… particular. It’s very easy to end up with a very opinionated set of rules that are trying to change your code into something unfamiliar.

Trust me when I say some of these rules are quite a lot to swallow. One of my absolute “favorite” rules is trailing_whitespace, which enforces absolutely no whitespace at the end of a line of code. 🙄

Even if you want to embrace SwiftLint in your project, you then need to parse through 200+ rules in order to figure out what they are, whether or not they’re useful, and how many times your own existing code violates each one. No thank you.

swiftlint-autodetect

Enter the new project swiftlint-autodetect by professional grump (but actually good guy) Jonathan Wight. This project — as with all clever ideas — is brilliant in its simplicity. When run against an existing codebase, it runs SwiftLint with every rule enabled, and then figures out which ones are not violated at all. The rules that your code is already passing are then output as a ready-to-use SwiftLint configuration file.

swiftlint-autodetect generate /path/to/project/directory

The generated file will have all currently known SwiftLint rules included, but the ones where violations would occur are commented out, so they are ignored by SwiftLint. Using this file, you can integrate SwiftLint into your build process, painlessly, without having to change your code to meet some weird-ass esoteric linting requirement. 😗 👌🏻

Increasing Coverage

I’m very nearly ready to release a new project, and I’m doing some cleanup and refactoring to get ready for its release. I decided to add SwiftLint support using swiftlint-autodetect, and then I wanted to investigate which SwiftLint rules I was violating, but perhaps shouldn’t be.

Conveniently, swiftlint-autodetect has another trick up its sleeve: it can also output a count of the number of violations for each rule. Additionally, it will mark with an * which rules you can instruct SwiftLint to fix automatically using swiftlint --fix. That makes it easy to start at the bottom of the resulting list, where the counts are low, and use that as a guide to slowly layer on more and more SwiftLint rules, as appropriate.

swiftlint-autodetect count /path/to/project/directory

This is exactly what I’ve done: I started with the automatically generated file, and then went up the list that count generated to turn on rules that seemed to be low-hanging fruit. Some I decided to leave disabled; some I decided to enable and bring my code into compliance.

y tho

Thanks to the combination of these two subcommands on swiftlint-autodetect, I am now linting my source code before every build. I’ve fixed some inconsistencies that I know would bother me over time. I’ve also found a couple spots where taking a slightly different approach can help improve performance/consistency.

Because I’m an individual developer — not despite it — I find it’s important to use the tools available to help keep your code clean, correct, and working. Though I don’t deploy every tool under the sun, I do think having some combination of CI, unit testing, and linting is a great way to use computers as a bit of the parachute that peer developers would normally provide.