Looking back over what I've (mostly privately) written about XOXO, I'm surprised to find a clear arc. The ethos of the festival has both tracked with and helped guide my return to freelancing. Now that the final XOXO is over, I want to gather my thoughts on my three times attending and how they've helped me over the years.


In 2018, at my first XOXO, I had a strong sense of not belonging. At the time it was the biggest version of the festival, and I felt like I'd been lucky to have somehow snuck into the party with the cool kids. I had a full-time job and didn't really make much online. Attending was aspirational. I'm not even sure I wrote an #intro post that year. The whole thing was overwhelming, in a good way.

In retrospect I found it a deeply moving experience. In full compliance with the XOXO Dream™️, it was one of the catalyzing moments for returning to the freelance world. XOXO also helped show me that I wasn't alone: everyone, speakers included, felt imposter syndrome to some degree. The festival was really the first time I'd met other very online folks in person.


I left my full-time job in June of 2019 and promptly went back to Portland for my second XOXO. There's an optimism throughout the whole event that's impossible not to get swept up in1; I'd missed that energy from the previous trip. That year I wrote an #intro, and had a few projects I'd begun working on for both clients and myself.

spongebob meme: five years later

During the intervening years I started many new projects, most of which I'm still working on. I wrote several applications, created pen plotter art, modeled in CAD for 3D printing, designed PCBs for electronics projects, and more. A lot of these projects were connected to my "real" profession as a digital tech, but others (read: the plotter art) were just for the sake of trying something new.


In short, I felt I was ready for XOXO this year. The community on Slack was a sounding board and support group, especially during the pandemic. The idea of returning to Portland, one more time, felt like going home2.

This year I think I was, for the first time, able to put most of my imposter syndrome aside. I was reunited with friends from previous years, and I didn't just 'meet' new people but, I think, made even more friends: people I'm going to follow and advocate for online as much as I can. It was nice to be around all of the folks from the internet again.

The other thing, and this is the self-serving part of the post, is that people recognized me from the internet. It's one of the things I'd written about after my first XOXO: that everyone seemed to be a notable person online. I don't have 'an audience', but being recognized for your work is always flattering, and for me it was a little bewildering.

I think the reason is that I've become much more willing to share my projects. Being nurtured by one of the most supportive communities around gave me the confidence to publish my work, to be proud of it, and to find my niche and voice online. XOXO, in a sense, found me when I needed it.

Something I've taken away this year, especially after watching so many of the Art & Code talks, is a desire to make more things that aren't necessarily practical. I'd like to work on projects that explore a concept, teach me something new, or are just fun. I'm not sure what those projects might be, but I'm looking forward to finding out.


  1. One could, without the benefit of hindsight, say it was infectious.

  2. For multiple reasons, Portland will always be home.

All the way back in May I got a bad idea: Capture One for the GameBoy.

Obviously not all of Capture One, but some sort of controller. While bored one evening I built a very non-functional proof-of-concept GameBoy game with GB Studio demonstrating what the features of the "game" would be. In short it would allow you to add color tags and star ratings to images. As a bonus I wanted the whole system to be (optionally) wireless.

A couple of rabbit holes into GameBoy link cable communication later, I'd discovered that it uses SPI, which is easy to work with. Sending data via the GameBoy link port isn't actually all that difficult, and GB Studio has an easy drag-and-drop node you can use to transmit (and receive) data. Unfortunately the initial tests weren't great. GB Studio, as a high-level programming environment, expects a specific format (and response) for the transmitted data, whereas all I wanted to do was send a couple of bytes of information. I tried over and over to figure out how to make GB Studio happy, but in the end never could get it working reliably.

Amazingly, there's an entire SDK for the GameBoy: GBDK. It turned out not to be all that much work to get the project set up, and having written C++ for the Raspberry Pi Pico before really helped. The game is fairly simple: it sits in a loop waiting for input and teleports the "player" between positions. With reliable link communication ready, the next step was doing something with that data.

In the "game" you play as the cursor, navigating the wide world of Capture One. The system lets you change image selection with up and down, set color tags and rating with Ⓐ and clear the tag or rating with Ⓑ. When you press a button the game sends the data over the link cable to whatever's on the other side.

The adapter does two things: listen for data, either from the link cable or the radio, and pass that on to the computer via USB. Coincidentally, Apple had just officially announced embedded Swift, so I figured this would be a fun project to test out programming for the RP2040 in something other than C++. After a little bit of messing around with the makefile for building the binary I was even able to share1 some code between the firmware and the client library.
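To give a sense of what "shared" means here, this is a minimal sketch of the kind of message type that can live in one Swift file and compile for both targets. The case names and byte values are hypothetical, not the project's actual protocol; the real constraint is that the file can't import anything, so it sticks to plain enums and integers.

```swift
/// A hypothetical shared message type, compiled into both the RP2040
/// firmware and the Mac client library. No imports, so it stays embedded-safe.
enum LinkMessage: UInt8 {
    case selectPrevious = 0x01  // d-pad up: previous image
    case selectNext     = 0x02  // d-pad down: next image
    case setValue       = 0x03  // Ⓐ: apply the chosen color tag or rating
    case clearValue     = 0x04  // Ⓑ: clear the tag or rating

    /// Decode a raw byte received from the link cable or the radio.
    static func decode(_ byte: UInt8) -> LinkMessage? {
        LinkMessage(rawValue: byte)
    }
}
```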

I decided to reuse a lot of the communication protocol I'd built for the GPI Controller2, and even added the link adapter into ControlRoom.

The final piece of the puzzle was the hardware. I designed a simple PCB for the Tiny2040, nRF24L01 radio, and definitely official GameBoy link port. All of the components are easy to solder by hand, so I put together a little kit that includes all of the electronics and a couple of 3D printed panels.

A friend, and fellow digital tech, had asked where you'd hold the whole assembly when using it wirelessly. I'd been thinking you'd leave it on a table, or maybe put it in your pocket. He suggested attaching it to the GameBoy. At first I didn't think it would be useful; it's not like it used the cart slot3 for anything. Eventually I decided to try it, so I designed and printed a version of the back case shaped like a GameBoy cartridge.

The whole project was quite the learning experience, exploring lots of new concepts. If you're a very particular kind of nerd the kits are available for purchase.


  1. Sharing is, perhaps, a strong word. The embedded code can't import any dependencies, so there's still a lot of code duplication. However the files live inside a Swift package, so I can at least run tests.

  2. I'm just realizing I haven't posted any updates about that in over a year.

  3. I very much would like to try to design a version that's built into a GameBoy cart.

Far too long ago I announced a major new version of ScreeningRoom, thinking both that its release was just around the corner and that it was unquestionably better in all ways. I was wrong on both counts. I heard from beta testers that they were still using the previous version because the menu bar functionality was critical, even with some of the bugs.

After mulling things over I decided to bring the improvements from V2 back to the V1 app. What this means in practice is the latest update still lives in the menu bar, but the underlying preview framework is all new.

Bug Fixes & Improvements

One of the most important changes and fixes is to the crosshair. The positioning bug has been fixed: no longer will the crosshair run off the edge of the preview. Another change is that the crosshair always tracks the mouse, even when the previews are paused, and the lines run across the entire preview area, not just the display the mouse is in. This makes locating the cursor even easier.

The preview windows have also been rebuilt from the ground up. The popover now has a toolbar at the bottom with playback controls for quickly starting and stopping all displays. You can create separate windows for each display from the Display icon, which also has individual playback controls. Each preview window has independent playback control, even for the same display.

New Features

There are also a couple of new features that weren't even available in the V2 beta, both of which pertain to how new windows behave.

The first is "Play streams by default", which does what it says on the tin. If you want to quickly see another screen you can now have the preview appear already playing the screens, no need to start them manually.

Second is an option to hide the main display by default. In most cases the main display is the one screen you can always see, so having it show up in the preview isn't terribly useful. This preference takes care of that.

Availability

The update is rolling out now and should show up automatically within the next day; it's also available to download from the website. If you haven't yet tried ScreeningRoom, a full-featured 14-day trial is available.

In high school I loved playing a little Flash game called Planarity. I've absolutely no idea how I stumbled upon it, but I played it most days in the library before classes if I got to school early. In the years between then and now it would occasionally cross my mind as a fun memory. Since learning to code it's also crossed my mind occasionally, but with a more concrete notion: it would be fun to build an implementation of the game.

Last week it crossed my mind again, and this time I had an idea of how to actually build the game. I was wrong.

The algorithm behind Planarity is quite simple, but also more involved than I'd thought as a player of the game. The creator of the game, John Tantalo, was gracious enough to publish the algorithm in the public domain. The game isn't just a series of random vertices and lines, but is rather built on an underlying assortment of intersecting lines, which in turn create the puzzle to solve.

I was able to reuse a lot of my work from SwiftGraphics, which has most of the basic geometry code needed for doing line intersections and such. The hardest part was figuring out how to build the actual graph from the original intersection points. I'm not sure if my implementation is the most efficient, but it does seem to work.
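For the curious, here's a rough sketch of the idea, assuming lines in slope-intercept form. It isn't my SwiftGraphics-based implementation, just the shape of the algorithm: intersect every pair of lines to get the vertices, then connect intersections that sit next to each other along the same line.

```swift
// Illustrative sketch only, with hypothetical types (Line, Vertex).
struct Line { var slope: Double; var intercept: Double }   // y = slope * x + intercept

struct Vertex { var x: Double; var y: Double; var lineA: Int; var lineB: Int }

func intersection(_ a: Line, _ b: Line) -> (x: Double, y: Double)? {
    guard a.slope != b.slope else { return nil }            // parallel lines never meet
    let x = (b.intercept - a.intercept) / (a.slope - b.slope)
    return (x, a.slope * x + a.intercept)
}

func buildGraph(from lines: [Line]) -> (vertices: [Vertex], edges: [(Int, Int)]) {
    // Every pairwise intersection becomes a vertex of the puzzle.
    var vertices: [Vertex] = []
    for i in 0..<lines.count {
        for j in (i + 1)..<lines.count {
            if let p = intersection(lines[i], lines[j]) {
                vertices.append(Vertex(x: p.x, y: p.y, lineA: i, lineB: j))
            }
        }
    }

    // Along each line, sort its intersections by x and connect neighbours.
    var edges: [(Int, Int)] = []
    for line in lines.indices {
        let onLine = vertices.indices
            .filter { vertices[$0].lineA == line || vertices[$0].lineB == line }
            .sorted { vertices[$0].x < vertices[$1].x }
        for pair in zip(onLine, onLine.dropFirst()) {
            edges.append(pair)
        }
    }
    return (vertices, edges)
}
```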

The entire app is built in SwiftUI, including the actual graph. It does take some work to convince SwiftUI to do absolute positioning the correct way1, but it provides some really nice options for higher level things like styling and interaction. Planarity is perfectly suited to being played on the iPad, especially with an Apple Pencil.
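In concrete terms, the footnote below boils down to modifier order. A stripped-down illustration (the VertexDot view is just an example, not the app's actual code):

```swift
import SwiftUI

// Size the dot with .frame() first, then place its centre with .position()
// in the parent's coordinate space.
struct VertexDot: View {
    var center: CGPoint

    var body: some View {
        Circle()
            .frame(width: 24, height: 24)   // size first...
            .position(center)               // ...then absolute position
    }
}
```

Roughly speaking, with the order flipped the dot ends up positioned inside its own small frame rather than placed within the graph's coordinate space.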

There are still things to polish and features to add. For instance, I'd like to add a more comprehensive records section, keeping track of puzzle history like the fewest moves taken to solve a level and the total number of moves. It would also be fun to export an animation of the puzzle being solved2.

I would like to release the game when it's ready, even if no one else is interested in it. The app needs a little more polish, and a more original name. In the meantime I have graphs to untangle.

Update: I've done the polishing and the game is now available to beta test on TestFlight.


  1. .frame().position(), not .position().frame()

  2. Perfect for all of that social media clout.

It's been a little while since the last Capture One Stream Deck plugin update, but it's worth the wait. The latest version packs in a lot of new features and some important bug fixes. The update is available now in the Elgato marketplace.

New

  • Color tag and rating key actions can now toggle their value
  • Flip Crop action
  • Select last n images for quick review
  • Toggle stored filters in the current collection

Changed

  • Actions require selecting a value, rather than falsely indicating a default
  • Open Collection now has a folder icon
  • Sort action can now sort by all properties
  • Sort action toggles between two user selected options

Fixed

  • Plugin more reliably connects to Capture One on first install
  • Dial actions show their titles when an image isn't selected

Adding Proper Metadata to Film Scans

06 Dec 2023 ∞

I have a fairly extensive1 collection of film cameras. Everything from Kodak to Leica to Hasselblad to Minox. Most of them work and are fun to shoot with. This, however, isn't a camera review2. It's instead about my great obsession with putting metadata where it belongs.

I use VueScan and a Nikon CoolScan to scan all the film I get back from the lab; the resulting images are great, but the DAM situation isn't. I want as much information about the photo as possible, both in terms of scanned resolution and information about the photo itself. Film cameras, as it happens, aren't great at applying their own metadata.

In the past I've used Capturebot to apply camera, lens, and film stock metadata to my scans. This involved maintaining a spreadsheet with a complicated mix of all that information by hand. The table looks a little something like this:

| Filename | Make | Model | Lens Make | Lens Model | Focal Length | Film | ISO |
|---|---|---|---|---|---|---|---|
| Canonet | Canon | Canonet | Canon | | 40 | | |
| HassV | Hasselblad | 501CM | Zeiss | | | | |
| RAP100F | | | | | | Fuji RAP100F | 100 |
| 35GT | Minox | 35 GT | Minox | Color-Minotar f/2.8 | 35 | | |
| P3200 | | | | | | Kodak T-Max P3200 | 3200 |
| RPX | | | | | | Rollei RPX 100 | 100 |

Even with all of that I didn't have all of the metadata combinations. I have, for instance, multiple lenses for the Hasselblad but only one line for all of them, so the specific lens is missing. I could of course add separate rows and filenames for each, but it just becomes a lot to manage.

About a month ago I'd just gotten some film back from the lab and had A Thought: what if I just built an entirely new application specifically for writing proper metadata to film scans? I told myself I wouldn't do it. Three days later I had a working proof-of-concept.

Take Stock main window

The aforementioned prototype quickly grew into a new app: Take Stock. Which, I'm going to mention now, is available in beta on TestFlight. I think Take Stock is the easiest way to embed proper EXIF metadata in your film scans; it's also a great way to keep tabs on your collection.

Take Stock attempts to add as much metadata as possible to each image. The obvious ones are camera & lens makes and models, ISO, focal length & aperture3. After scouring Exiftool's metadata tables, I also tried to include more esoteric tags, like the lens specification, which gives the full zoom and aperture range of a lens. Anything that makes sense and I can find the appropriate tag for is fair game.
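To make that concrete, here's a sketch of the kind of argument list that gets handed to Exiftool. The tag names (Make, LensModel, LensInfo, and so on) are real Exiftool tags, but the Swift types and helper are illustrative stand-ins, not SwiftEXIF's actual API.

```swift
import Foundation

// Hypothetical model of one roll's worth of gear details.
struct FilmExposure {
    var cameraMake: String, cameraModel: String
    var lensMake: String, lensModel: String
    var focalLength: Double, aperture: Double
    var iso: Int
}

// Build the per-file arguments that would be passed to the exiftool binary.
func exiftoolArguments(for exposure: FilmExposure, file: URL) -> [String] {
    [
        "-Make=\(exposure.cameraMake)",
        "-Model=\(exposure.cameraModel)",
        "-LensMake=\(exposure.lensMake)",
        "-LensModel=\(exposure.lensModel)",
        "-FocalLength=\(exposure.focalLength)",
        "-FNumber=\(exposure.aperture)",
        "-ISO=\(exposure.iso)",
        // Lens specification: min/max focal length and min/max aperture,
        // all the same value for a fixed prime like the Color-Minotar.
        "-LensInfo=\(exposure.focalLength) \(exposure.focalLength) \(exposure.aperture) \(exposure.aperture)",
        file.path
    ]
}
```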

The core of Take Stock is your collection of cameras, lenses, and film stocks. You enter the details about your cameras, lenses, and the film you shoot, which you can then select to embed in photos. You can also select that info ahead of time, which generates a filename you can give to your scans to automatically select the proper camera, lens, and film.
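The matching itself can be as simple as a prefix check. This is purely illustrative; Take Stock's real naming scheme and types may look nothing like this.

```swift
// Hypothetical: pre-selecting gear produces a filename prefix, and matching
// that prefix on import picks the camera, lens, and film back out.
struct GearPreset {
    var prefix: String
    var camera: String
    var lens: String?
    var film: String?
}

let presets = [
    GearPreset(prefix: "HassV-RPX", camera: "Hasselblad 501CM",
               lens: "Zeiss Planar 80mm", film: "Rollei RPX 100"),
    GearPreset(prefix: "Canonet-P3200", camera: "Canon Canonet",
               lens: "Canon 40mm", film: "Kodak T-Max P3200")
]

/// Match a scan like "HassV-RPX_0012.tif" back to the preset that named it.
func preset(forScan filename: String) -> GearPreset? {
    presets.first { filename.hasPrefix($0.prefix) }
}
```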

You can add individual images or entire folders (which update automatically) to Take Stock, then go through to double check the metadata or make adjustments to specific images. Once you're satisfied, you export that metadata, which is embedded into the image file. Any photo application that reads EXIF tags will read the new metadata.

As an added bonus you can also track which film is loaded in a camera, because not all cameras have an ISO setting, a place to slide a box flap, or even a window indicating whether film is loaded.

The entire application was built in SwiftUI with SwiftData as the model layer, which means it syncs via iCloud. I also massively improved my Exiftool wrapping library, SwiftEXIF, which powers all of the actual metadata writing.

If you shoot film and care about metadata (even a little) do check out the beta and let me know what you think.


  1. Sitting in my office chair I can count 22 without getting up.

  2. Though I have thought doing that would be fun.

  3. For fixed focal length or aperture lenses

A Future of Capture One for iPhone

29 Oct 2023 ∞

When Capture One announced they were releasing Capture One for iPhone a lot of digital techs rolled their eyes. Why would we want to tether to a phone? The marketing didn't offer a compelling reason, instead mostly featuring "influencers" with knotted up orange cables, seemingly another example of how Capture One had forgotten its core professional users.

While I'm sure there are situations where a photographer might find this useful, say out in the wilderness and wanting to pack light, in studio or on location with a crew it makes little sense. I don't think it's constructive to focus on the ways Capture One (the mobile app) isn't a fit for commercial workflows, of which there are many. Instead I'd like to explore how Capture One (the core technology) can be leveraged on a mobile device to help working digital techs.

Wireless tethering, as it exists today, is useless. RAW files are too big, Wi-Fi bandwidth is too low. There are solutions to transfer JPGs for preview but those aren't the real photos, so there's little point in organizing and adjusting them because you'll have to do it all again when you import the RAW files later. What you'd want is to only send a better preview image, not just a JPG, but something more representative of the RAW data. A file that you could organize and adjust: a proxy for the future RAW file. As it happens, Capture One already has such a file1, and with some changes to the way the iPhone app works it could offer a compelling wireless tethering solution.

The mobile app would act as a wireless bridge between camera and computer. It would advertise itself (and thus a connected camera2) to other Capture One sessions, where it would appear in the list of available cameras. The user could then select this remote camera the same way as any other camera connected via USB or wirelessly, and be able to shoot as normal, with a few changes to the workflow.

When a photo is taken, Capture One on the phone will render a proxy file and send that3 to the host session, keeping the RAW file either on the camera's card or downloading it to the phone as a normal tethered image. The comparatively small proxy files should be easy to send in real time as images come in. If the network goes offline the phone can continue caching proxies to send upon reconnection.

A digital tech's workflow largely stays the same: setting capture folders, naming files, adjusting images, tagging plates, etc. All of these operations would be done to the proxy files alone, in a similar manner to working with offline images stored on an external drive. When it comes time to offload the images to the main session they'd be linked up to their proxies, copied into place, and the photos would come online.

The minimal app wouldn't need to do much besides render proxies, but there's no reason it couldn't also be more active in the session; after all, modern iPhones are clearly up to the task of running Capture One, as evidenced by the fact they run Capture One. With a little bit of two-way communication the mobile app could reflect the adjustments made in the host session, whether next-capture adjustments or adjustments made after the fact. The file organization could also be reflected, to help keep everyone on the same page.

Some of this could also serve as the core for a revamped Capture Pilot. With one session acting as a server, many clients could connect, either as remote cameras or viewers. Leveraging the permission structure of Capture One Live, these client sessions could have varying capabilities tailored to the needs of the specific user. A distributed Capture One would offer unparalleled flexibility for different types of workflows: an art director could make selects from a separate computer; clients could review images as they come in on an iPad; a food stylist could shoot from set; multi-set shoots could share a session, organizing files centrally.

There are a lot of interesting possibilities for how Capture One can grow to be a multi-platform application. New platforms need to enable those of us in the commercial photo industry to expand our workflows, not abandon them. The recent, greatly needed, improvements to Capture One Live are a fantastic example of how to enable new workflows that weren't possible before. Hopefully Capture One for iPhone can similarly open new doors in the future.


  1. The proxy file was even improved in 16.3

  2. Or the phone's camera, if that's your cup of tea.

  3. It could be worth having an option to send the RAW file, accepting the tradeoffs, for cases where you need it like when focus stacking.

False Color in Capture One

28 Sep 2023 ∞

False color test image

As I've been doing more motion work there are often aspects of that workflow I'd like to take advantage of in my stills workflow with Capture One. While not everything translates directly, there is one that does: false color.

False color maps the values in an image onto a gradient, usually with blue for darker colors, green for mid-tones, and red for highlights. There are many variations on how the colors map. Some profiles drop most values to greyscale, only highlighting certain ranges; others use a continuous gradient; and there's everything in between.

There are lots of existing LUTs you can download, or you can even create your own, but none of them matched the false color I'm used to: the FSI LUM coloring.

Creating a LUT

In order to match the full gradient, and the particular color stops, I created a test chart. It had a gradient from black to white, plus specific values broken out every 10 and 5 percent. The basic gradient goes blue, teal, green, yellow, red. However, the colors aren't evenly distributed: the green mid-tones take up more space, and there's an interesting sharp gradient in the deep blues.
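To make the shape of that mapping concrete, here's an illustrative sketch of a piecewise-linear gradient map. The stop positions and colors are rough approximations for demonstration, not the FSI LUM values.

```swift
// Rough, made-up stop positions; note the deliberately wide mid-tone green band.
struct Stop { var at: Double; var r: Double; var g: Double; var b: Double }

let stops: [Stop] = [
    Stop(at: 0.00, r: 0.0, g: 0.0, b: 0.5), // deep blue
    Stop(at: 0.10, r: 0.0, g: 0.6, b: 0.6), // teal
    Stop(at: 0.35, r: 0.0, g: 0.6, b: 0.0), // green...
    Stop(at: 0.65, r: 0.0, g: 0.6, b: 0.0), // ...still green through the mid-tones
    Stop(at: 0.85, r: 0.9, g: 0.8, b: 0.0), // yellow
    Stop(at: 1.00, r: 0.9, g: 0.0, b: 0.0)  // red
]

/// Map a luminance value (0 to 1) to a false color by interpolating between stops.
func falseColor(forLuminance value: Double) -> (r: Double, g: Double, b: Double) {
    let y = min(max(value, 0), 1)
    for (lower, upper) in zip(stops, stops.dropFirst()) where y <= upper.at {
        let t = (y - lower.at) / (upper.at - lower.at)
        return (lower.r + t * (upper.r - lower.r),
                lower.g + t * (upper.g - lower.g),
                lower.b + t * (upper.b - lower.b))
    }
    return (stops.last!.r, stops.last!.g, stops.last!.b)
}
```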

I set up a gradient map in Affinity Photo so I could tweak the colors visually. The teal and yellow were toned down a bit because they appear visually brighter than their surrounding colors. Once I was happy with the gradient I exported a LUT from Affinity Photo (which is a great feature and something I hope more photo software integrates).

False Color in Capture One

Capture One, unfortunately, does not have any support for LUTs, so my artisanal hand-crafted cube file isn't helpful. There's a command line tool, ocio, which can convert between different LUT formats, and also convert a LUT into an ICC profile. Capture One uses ICC profiles for everything: camera color profiling, working color space, processed image color, etc. The last one is what we'll take advantage of.

When creating a process recipe you can, of course, select the color space. Most of the time that will be Adobe RGB or sRGB, but it can be any profile, including our false color ICC profile. In order to use false color the ICC profile needs to be copied to ~/Library/ColorSync/Profiles; after a quick restart of Capture One the profile will show up in the list1.

Create a new False Color recipe; most of the settings don't matter, but select the false color profile. By toggling Recipe Proofing with the False Color recipe selected, Capture One will render the image with that profile, or to put it another way: in false color.

Some Caveats

The first caveat is that using recipe proofing is kind of a workaround. If you select a different recipe you'll get a different proofed profile.

The next is a quirk in how Capture One appears to render blown out highlights, possibly combined with an artefact of the LUT to ICC conversion. Portions of the image which are (255, 255, 255) don't render as the red from our gradient map, but instead as a light pink. Affinity Photo doesn't do this, which makes me think it's a C1 thing.

In Practice

False color is an incredibly useful tool, and it's something I hope will be integrated into stills workflows. Maybe one day Capture One will add it in addition to the basic highlight and shadow warnings (they already have vectorscope-style color wheels). In the meantime you can download the profile and try it out on your next shoot.


  1. If Capture One doesn't read the profile it can also be copied to Contents/Frameworks/ImageProcessing.framework/Versions/A/Resources/Profiles/Output inside the app bundle, however this needs to be done for each version.

Last year I got a Flanders Scientific DM241, and was thrilled to find many of its functions could be controlled remotely. While oftentimes the monitor is close at hand, it could also be across the studio, so being able to control things like false color from your computer is handy. Plus, it's fun to poke around the protocols, but more on that in a minute.

The easiest way is to connect the monitor to ethernet and use the IP Remote Utility. The app gives you control over all of the keys on the monitor and even lets you stream back the scopes. There's also a Stream Deck profile, which is handy. However, it just uses keyboard shortcuts, meaning you can't use it if another application is in the foreground.

The Stream Deck Plugin

The plugin is configured with the IP address of the monitor, and can even connect to multiple monitors. Stream Deck buttons can be assigned to any of the four monitor inputs or the six function keys. Pressing a button on the Stream Deck is just like walking over to the monitor and pressing a button there.

The plugin is entirely stand-alone. It connects directly to the monitor over the network, without the need for the IP Remote Utility to be installed or running. The only requirement is that your monitor be reachable over the network.

The GPI Controller enclosure

The GPI Controller

Next to the LAN port is another RJ-45 labeled GPI, which is the second way to control the monitor. The pins of the port are used as normally open switches; connect a pin to ground to trigger the assigned function. Aside from the GPI Keypad Lockout there don't seem to be many options for making use of the additional seven functions the GPI has.

The GPI Controller1 is a small box, USB-C on one side and RJ-45 on the other, that does just that. The controller is powered by an RP2040 microcontroller and custom firmware providing a USB bridge to the GPI. The enclosure is 3D printed and everything is hand assembled.
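Conceptually, the firmware just opens and closes those switches in software. Here's a sketch of the idea; the GPIOPin protocol is a hypothetical stand-in for the RP2040's GPIO layer, not the actual firmware code.

```swift
// Conceptual sketch with hypothetical types; not the shipping firmware.
protocol GPIOPin {
    func configureAsInput()     // high impedance: the switch reads as open
    func configureAsOutputLow() // sunk to ground: the switch reads as closed
}

struct GPIFunction {
    let pin: any GPIOPin
    private(set) var isClosed = false

    mutating func set(closed: Bool) {
        // Closing the "switch" just means sinking the pin to ground; opening it
        // means letting the pin float so the monitor sees the contact as open.
        if closed {
            pin.configureAsOutputLow()
        } else {
            pin.configureAsInput()
        }
        isClosed = closed
    }

    mutating func toggle() { set(closed: !isClosed) }
}
```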

The Stream Deck plugin connects to the GPI controller automatically and provides three different key options:

  1. Latching button
  2. Momentary button
  3. Multi-function button

The latching button is the most common and simply toggles a function on and off, e.g. false color or a tally light.

The momentary button is useful for stateless functions like powering the monitor on.

Finally the multi-function button allows you to combine multiple latching functions. A great use for this is creating a yellow tally, which is the combination of red and green.
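One way to picture the three options, reusing the hypothetical GPIFunction type from the sketch above (again, the names are mine, not the plugin's actual implementation):

```swift
// Illustrative mapping of the three key options onto GPI functions.
enum GPIKeyAction {
    case latching(GPIFunction)          // toggles a function on and off
    case momentary(GPIFunction)         // closed only while the key is held
    case multiFunction([GPIFunction])   // e.g. red + green tallies for yellow

    mutating func keyDown() {
        switch self {
        case .latching(var function):
            function.toggle()
            self = .latching(function)
        case .momentary(var function):
            function.set(closed: true)
            self = .momentary(function)
        case .multiFunction(var functions):
            for index in functions.indices { functions[index].toggle() }
            self = .multiFunction(functions)
        }
    }

    mutating func keyUp() {
        if case .momentary(var function) = self {
            function.set(closed: false)
            self = .momentary(function)
        }
    }
}
```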

Availability

The Stream Deck plugin is going to be submitted to the store and will be available as a free download, although supporting development through Ko-Fi is greatly appreciated.

The design of the GPI controller is just about finished, so I'm getting ready to do a small production batch. I'm hoping to gauge interest early so I know how many to make.

If you'd like to be one of the first to try them out please reach out.


  1. To see the controller in action check out the little promo I made for it.

Digital techs spend a lot of time organizing files; it's our primary job. We create folders for every shot, use tokens when processing to organize files automatically, and then dump trashed files into a single folder. Scrolling through thousands of files to find a missing frame isn't exactly fun.

To help solve this I've written a script that adds some order to your session's trash. Running the script trashes the selected variants in an orderly way; think of it as using the Image Folder Name token. Files are sorted into folders in the session trash that match your capture folders.

Need to find an image? No more hunting around, just browse to the shot folder and you're off to the races.

Details and Caveats

When trashing images that are all in the same favorite, the script is optimized to do everything as quickly as possible and takes advantage of batch operations. If trashing from an album where you can see images from multiple folders, the script has an alternate mode that isn't as optimized, but is still plenty fast.

The script only mirrors the image folder name, not the entire hierarchy in the session. This might be something I'll add in the future if there's a need for it. If you use more complex session structures let me know.

Capture One famously doesn't handle subfolders well, and the Trash is no exception. When using the script you'll have to browse to the subfolders manually.

Downloads

If you find yourself wishing for a more organized Trash Experience go download the script and take a look at the rest of the collection. If you find what I do helpful consider supporting my work through either a one-time donation or by becoming a member.