Far too long ago I announced a major new version of ScreeningRoom, thinking both that its release was just around the corner and that it was unquestionably better in every way. I was wrong on both counts. I heard from beta testers that they were still using the previous version because the menu bar functionality was critical, even with some of the bugs.

After mulling things over I decided to bring the improvements from V2 back to the V1 app. What this means in practice is that the latest update still lives in the menu bar, but the underlying preview framework is all new.

Bug Fixes & Improvements

One of the most important changes is to the crosshair. The positioning bug has been fixed: the crosshair will no longer run off the edge of the preview. The crosshair also now always tracks the mouse, even when the previews are paused, and its lines run across the entire preview area, not just the display the mouse is in. This makes locating the cursor even easier.

The preview windows have also been rebuilt from the ground up. The popover now has a toolbar at the bottom with playback controls for quickly starting and stopping all displays. From the Display icon you can create separate windows for each display, each with its own playback controls. Preview windows play back independently, even when showing the same display.

New Features

There are also a couple of new features that weren't available in the V2 beta, both pertaining to how new windows behave.

The first is "Play streams by default", which does what it says on the tin. If you want to quickly see another screen, the preview can now appear already playing; no need to start the streams manually.

Second is an option to hide the main display by default. In most cases the main display is the one screen you can always see, so having it show up in the preview isn't terribly useful. This preference takes care of that.


The update is rolling out now and should show up automatically within the next day; it's also available to download from the website. If you haven't yet tried ScreeningRoom, a full-featured 14-day trial is available.

In high school I loved playing a little Flash game called Planarity. I've absolutely no idea how I stumbled upon it, but I played it most days in the library before classes if I got to school early. In the years since, it would occasionally cross my mind as a fun memory, and once I learned to code, with a more concrete notion: it would be fun to build an implementation of the game.

Last week it crossed my mind again, and this time I had an idea of how to actually build the game. I was wrong.

The algorithm behind Planarity is at once quite simple and more involved than I'd thought as a player of the game. The creator of the game, John Tantalo, was gracious enough to publish the algorithm in the public domain. The game isn't just a series of random vertices and lines; it's built on an underlying arrangement of intersecting lines, which in turn creates the puzzle to solve.

I was able to reuse a lot of my work from SwiftGraphics, which has most of the basic geometry code needed for line intersections and the like. The hardest part was figuring out how to build the actual graph from the original intersection points. I'm not sure my implementation is the most efficient, but it does seem to work.
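The core of Tantalo's published algorithm can be sketched in a few lines. The app itself is Swift, but here's a Python sketch for brevity; the line representation (slope and intercept) and all function names are my own, not from SwiftGraphics:

```python
import itertools
import random

def intersect(l1, l2):
    """Intersection point of two lines given as (slope, intercept), or None if parallel."""
    (m1, b1), (m2, b2) = l1, l2
    if m1 == m2:
        return None
    x = (b2 - b1) / (m1 - m2)
    return (x, m1 * x + b1)

def planar_graph(num_lines, seed=0):
    """Build a Planarity-style puzzle: vertices are the pairwise intersections
    of random lines; edges join intersections adjacent along the same line."""
    rng = random.Random(seed)
    lines = [(rng.uniform(-5, 5), rng.uniform(-20, 20)) for _ in range(num_lines)]
    points = {}  # (i, j) line-pair -> intersection point
    for i, j in itertools.combinations(range(num_lines), 2):
        p = intersect(lines[i], lines[j])
        if p is not None:
            points[(i, j)] = p
    edges = set()
    for k in range(num_lines):
        # All intersections lying on line k, ordered along the line by x
        on_line = sorted((pair for pair in points if k in pair),
                         key=lambda pair: points[pair][0])
        for a, b in zip(on_line, on_line[1:]):
            edges.add((a, b))
    return list(points), sorted(edges)
```

Because every edge connects consecutive intersections along a single line, the resulting graph is guaranteed to have a planar embedding; scattering the vertices randomly is what creates the puzzle.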

The entire app is built in SwiftUI, including the actual graph. It does take some work to convince SwiftUI to do absolute positioning the correct way1, but it provides some really nice options for higher-level things like styling and interaction. Planarity is perfectly suited to being played on the iPad, especially with an Apple Pencil.

There are still things to polish and features to add. For instance, I'd like to add a more comprehensive records section, keeping track of puzzle history like the fewest moves in a level to solve the graph and the total number of moves. It would also be fun to export an animation of the puzzle being solved2.

I would like to release the game when it's ready, even if no one else is interested in it. The app needs a little more polish, and a more original name. In the meantime I have graphs to untangle.

Update: I've done the polishing and the game is now available to beta test on TestFlight.

  1. .frame().position(), not .position().frame()

  2. Perfect for all of that social media clout.

It's been a little while since the last Capture One Stream Deck plugin update, but it was worth the wait. The latest version packs in a lot of new features and some important bug fixes. The update is available now in the Elgato marketplace.


New Features

  • Color tag and rating key actions can now toggle their value
  • Flip Crop action
  • Select last n images for quick review
  • Toggle stored filters in the current collection


Improvements

  • Actions require selecting a value, rather than falsely indicating a default
  • Open Collection now has a folder icon
  • Sort action can now sort by all properties
  • Sort action toggles between two user selected options


Bug Fixes

  • Plugin more reliably connects to Capture One on first install
  • Dial actions show their titles when an image isn't selected

Adding Proper Metadata to Film Scans

06 Dec 2023 ∞

I have a fairly extensive1 collection of film cameras. Everything from Kodak to Leica to Hasselblad to Minox. Most of them work and are fun to shoot with. This, however, isn't a camera review2. It's instead about my great obsession with putting metadata where it belongs.

I use VueScan and a Nikon CoolScan to scan all the film I get back from the lab. The resulting images are great; the DAM situation isn't. I want as much information about the photo as possible, both in terms of scanned resolution and information about the photo itself. Film cameras, as it happens, aren't great at applying their own metadata.

In the past I've used Capturebot to apply camera, lens, and film stock metadata to my scans. This involved maintaining, by hand, a spreadsheet with a complicated mix of all that information. The table looks a little something like this:

| Filename | Make  | Model | Lens Make | Lens Model | Focal Length | Film | ISO |
| -------- | ----- | ----- | --------- | ---------- | ------------ | ---- | --- |
| RAP100F  |       |       |           |            |              | Fuji RAP100F | 100 |
| 35GT     | Minox | 35 GT | Minox     | Color-Minotar f/2.8 | 35  |      |     |
| P3200    |       |       |           |            |              | Kodak T-Max P3200 | 3200 |
| RPX      |       |       |           |            |              | Rollei RPX 100 | 100 |

Even with all of that I didn't have all of the metadata combinations. I have, for instance, multiple lenses for the Hasselblad but only one line for all of them, so the specific lens is missing. I could of course add separate rows and filenames for each, but it just becomes a lot to manage.

About a month ago I'd just gotten some film back from the lab and had A Thought: what if I just built an entirely new application specifically for writing proper metadata to film scans? I told myself I wouldn't do it. Three days later I had a working proof-of-concept.

Take Stock main window

The aforementioned prototype quickly grew into a new app: Take Stock. Which, I'm going to mention now, is available in beta on TestFlight. I think Take Stock is the easiest way to embed proper EXIF metadata in your film scans; it's also a great way to keep tabs on your collection.

Take Stock attempts to add as much metadata as possible to each image. The obvious ones are camera & lens makes and models, ISO, focal length & aperture3. After scouring Exiftool's metadata tables, I also tried to include more esoteric tags, like the lens specification which gives the full zoom and aperture range of a lens. Anything that makes sense and I can find the appropriate tag for is fair game.

The core of Take Stock is your collection of cameras, lenses, and film stocks. You enter the details about your gear and the film you shoot, which you can then select to embed in photos. You can also select that info ahead of time, which generates a filename you can give to your scans to automatically select the proper camera, lens, and film.
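As a rough illustration of the filename trick: a short code in the filename can key back into the gear collection. Everything below is invented for illustration; the codes, fields, and filename format almost certainly differ from what Take Stock actually generates:

```python
# Hypothetical gear database keyed by a short code that a Take Stock-style
# filename could embed. Codes and fields are invented, not the app's format.
GEAR = {
    "35GT": {"make": "Minox", "model": "35 GT",
             "lens": "Color-Minotar f/2.8", "focal_length": 35},
    "RPX":  {"film": "Rollei RPX 100", "iso": 100},
}

def gear_for_scan(filename):
    """Match a scan's filename prefix (before the first '-') to a gear record."""
    prefix = filename.split("-", 1)[0]
    return GEAR.get(prefix)
```

A scan named `35GT-0012.tif` would then automatically pick up the Minox 35 GT's camera and lens metadata without any spreadsheet bookkeeping.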

You can add individual images or entire folders (which update automatically) to Take Stock, then go through to double-check the metadata or make adjustments to specific images. Once you're satisfied, you export, and the metadata is embedded into the image files. Any photo application that reads EXIF tags will pick up the new metadata.

As an added bonus you can also track which film is loaded in a camera, because not all cameras have an ISO setting, a place to slide a box flap, or even a window indicating whether film is loaded.

The entire application was built in SwiftUI with SwiftData as the model layer, which means it syncs via iCloud. I also massively improved my Exiftool wrapping library, SwiftEXIF, which powers all of the actual metadata writing.

If you shoot film and care about metadata (even a little) do check out the beta and let me know what you think.

  1. Sitting in my office chair I can count 22 without getting up.

  2. Though I have thought doing that would be fun.

  3. For fixed focal length or aperture lenses

A Future of Capture One for iPhone

29 Oct 2023 ∞

When Capture One announced they were releasing Capture One for iPhone a lot of digital techs rolled their eyes. Why would we want to tether to a phone? The marketing didn't offer a compelling reason, instead mostly featuring "influencers" with knotted up orange cables, seemingly another example of how Capture One had forgotten its core professional users.

While I'm sure there are situations where a photographer might find this useful, say out in the wilderness and wanting to pack light, in studio or on location with a crew it makes little sense. I don't think it's constructive to focus on the ways Capture One (the mobile app) isn't a fit for commercial workflows, of which there are many. Instead I'd like to explore how Capture One (the core technology) can be leveraged on a mobile device to help working digital techs.

Wireless tethering, as it exists today, is useless. RAW files are too big, Wi-Fi bandwidth is too low. There are solutions to transfer JPGs for preview but those aren't the real photos, so there's little point in organizing and adjusting them because you'll have to do it all again when you import the RAW files later. What you'd want is to only send a better preview image, not just a JPG, but something more representative of the RAW data. A file that you could organize and adjust: a proxy for the future RAW file. As it happens, Capture One already has such a file1, and with some changes to the way the iPhone app works it could offer a compelling wireless tethering solution.

The mobile app would act as a wireless bridge between camera and computer. It would advertise itself (and thus a connected camera2) to other Capture One sessions, where it would appear in the list of available cameras. The user could then select this remote camera the same way as any other camera connected via USB or wirelessly, and shoot as normal, with a few changes to the workflow.

When a photo is taken, Capture One on the phone will render a proxy file and send that3 to the host session, keeping the RAW file either on the camera's card or downloading it to the phone as a normal tethered image. The comparatively small proxy files should be easy to send in real time as images come in. If the network goes offline the phone can continue caching proxies and send them upon reconnection.
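The offline behavior is simple to state precisely. Here's a minimal, hypothetical model of the queue-and-flush logic; the names and structure are mine, not from any Capture One code:

```python
from collections import deque

class ProxyBridge:
    """Hypothetical sketch of the bridging behavior described above:
    proxies are sent live while connected and queued while offline."""

    def __init__(self, send):
        self.send = send       # callable that delivers a proxy to the host session
        self.online = True
        self.pending = deque() # proxies cached while the network is down

    def capture(self, proxy):
        if self.online:
            self.send(proxy)
        else:
            self.pending.append(proxy)

    def reconnect(self):
        # Flush cached proxies in capture order once the host is reachable again
        self.online = True
        while self.pending:
            self.send(self.pending.popleft())
```

The important property is that the host session always receives proxies in capture order, whether or not the network hiccuped in between.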

A digital tech's workflow largely stays the same: setting capture folders, naming files, adjusting images, tagging plates, etc. All of these operations would be done on the proxy files alone, in a similar manner to working with offline images stored on an external drive. When it comes time to offload the images to the main session, they'd be linked up to their proxies, copied into place, and the photos would come online.

The minimal app wouldn't need to do much besides render proxies, but there's no reason it couldn't also be more active in the session; after all, modern iPhones are clearly up to the task of running Capture One, as evidenced by the fact that they run Capture One. With a little two-way communication the mobile app could reflect the adjustments made in the host session, applying next capture adjustments or adjustments made after the fact. The file organization could also be reflected, to help keep everyone on the same page.

Some of this could also serve as the core for a revamped Capture Pilot. With one session acting as a server, many clients could connect, either as remote cameras or viewers. Leveraging the permission structure of Capture One Live, these client sessions could have varying capabilities tailored to the needs of the specific user. A distributed Capture One would offer unparalleled flexibility for different types of workflows: an art director could make selects from a separate computer; clients could review images as they come in on an iPad; a food stylist could shoot from set; multi-set shoots could share a session, organizing files centrally.

There are a lot of interesting possibilities for how Capture One can grow to be a multi-platform application. New platforms need to enable those of us in the commercial photo industry to expand our workflows, not abandon them. The recent, greatly needed, improvements to Capture One Live are a fantastic example of how to enable new workflows that weren't possible before. Hopefully Capture One for iPhone can similarly open new doors in the future.

  1. The proxy file was even improved in 16.3

  2. Or the phone's camera, if that's your cup of tea.

  3. It could be worth having an option to send the RAW file, accepting the tradeoffs, for cases where you need it like when focus stacking.

False Color in Capture One

28 Sep 2023 ∞

False color test image

As I've been doing more motion work there are often aspects of that workflow I'd like to take advantage of in my stills workflow with Capture One. While not everything translates directly, there is one that does: false color.

False color maps the values in an image onto a gradient, usually with blue for darker values, green for mid-tones, and red for highlights. There are many variations on how the colors map: some profiles drop most values to greyscale, only highlighting certain ranges; others use a continuous gradient; and there's everything in between.

There are lots of existing LUTs you can download, or you can even create your own, but none of them matched the false color I'm used to: the FSI LUM coloring.

Creating a LUT

In order to match the full gradient, and particular color stops, I created a test chart. It had a gradient from black to white, plus specific values broken out every 10 and 5 percent. The basic gradient goes blue, teal, green, yellow, red. However, the colors aren't evenly distributed: the green mid-tones take up more space, and there's an interesting sharp transition in the deep blues.

I set up a gradient map in Affinity Photo so I could tweak the colors visually. The teal and yellow were toned down a bit because they appear visually brighter than their surrounding colors. Once I was happy with the gradient I exported a LUT from Affinity Photo (which is a great feature and something I hope more photo software integrates).
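For illustration, this kind of gradient map boils down to piecewise-linear interpolation between color stops. The stop positions and RGB values below are rough placeholders, not the tuned values from my Affinity Photo gradient or the FSI LUM coloring:

```python
def lerp(a, b, t):
    """Linearly interpolate between two RGB tuples."""
    return tuple(round(x + (y - x) * t) for x, y in zip(a, b))

# Placeholder stops: positions and colors are illustrative only.
# Note the green mid-tone band is widened, mirroring the uneven
# distribution described above.
STOPS = [
    (0.00, (0, 0, 255)),    # blue  (deep shadows)
    (0.25, (0, 160, 160)),  # teal
    (0.50, (0, 200, 0)),    # green (mid-tones)
    (0.80, (220, 200, 0)),  # yellow
    (1.00, (255, 0, 0)),    # red   (highlights)
]

def false_color(v):
    """Map a luminance value in [0, 1] to an RGB false color."""
    for (p0, c0), (p1, c1) in zip(STOPS, STOPS[1:]):
        if v <= p1:
            return lerp(c0, c1, (v - p0) / (p1 - p0))
    return STOPS[-1][1]
```

Sampling this function across a 0-to-1 ramp is effectively what baking the LUT does; tweaking the stop list is the visual adjustment step done in Affinity Photo.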

False Color in Capture One

Capture One, unfortunately, does not have any support for LUTs, so my artisanal hand-crafted cube file isn't helpful. There's a command line tool, ocio, which can convert between different LUT formats, and also convert a LUT into an ICC profile. Capture One uses ICC profiles for everything: camera color profiling, working color space, processed image color, etc. The last one is what we'll take advantage of.

When creating a process recipe you can, of course, select the color space. Most of the time that will be Adobe RGB or sRGB, but it can be any profile, including our false color ICC profile. In order to use false color the ICC profile needs to be copied to ~/Library/ColorSync/Profiles, a quick restart of Capture One and the profile will show up in the list1.

Create a new False Color recipe; most of the settings don't matter, but select the false color profile. By toggling Recipe Proofing with the False Color recipe selected, Capture One will render the image with that profile, or to put it another way: in false color.

Some Caveats

The first caveat is that using recipe proofing is something of a workaround. If you select a different recipe you'll get a different proofed profile.

The next is a quirk in how Capture One appears to render blown-out highlights, possibly combined with an artefact of the LUT-to-ICC conversion. Portions of the image which are (255, 255, 255) don't render as the red from our gradient map, but instead as a light pink. Affinity Photo doesn't do this, which makes me think it's a C1 thing.

In Practice

False color is an incredibly useful tool, and one I hope will be integrated into stills software. Maybe one day Capture One will add it in addition to the basic highlight and shadow warnings (they already have vectorscope-style color wheels). In the meantime you can download the profile and try it out for your next shoot.

  1. If Capture One doesn't read the profile it can also be copied to Contents/Frameworks/ImageProcessing.framework/Versions/A/Resources/Profiles/Output inside the app bundle, however this needs to be done for each version.

Last year I got a Flanders Scientific DM241, and was thrilled to find many of its functions can be controlled remotely. While oftentimes the monitor is close at hand, it could also be across the studio, so being able to control things like false color from your computer is handy. Plus, it's fun to poke around the protocols, but more on that in a minute.

The easiest way is to connect the monitor to ethernet and use the IP Remote Utility. The app gives you control over all of the keys on the monitor and even lets you stream back the scopes. There's also a Stream Deck profile, which is handy. However, it just uses keyboard shortcuts, meaning you can't use it while another application is in the foreground.

The Stream Deck Plugin

The plugin is configured with the IP address of the monitor, and can even connect to multiple monitors. Stream Deck buttons can be assigned to any of the four monitor inputs or the six function keys. Pressing a button on the Stream Deck is just like walking over to the monitor and pressing a button there.

The plugin is entirely stand-alone. It connects directly to the monitor over the network, without the IP Remote Utility needing to be installed or running. The only requirement is that your monitor be reachable over the network.

The GPI Controller enclosure

The GPI Controller

Next to the LAN port is another RJ-45 port, labeled GPI, which is the second way to control the monitor. The pins of the port are used as normally open switches; connect a pin to ground to trigger the assigned function. Aside from the GPI Keypad Lockout, there don't seem to be many options for making use of the seven additional functions the GPI offers.

The GPI Controller1 is a small box, USB-C on one side and RJ-45 on the other, that does just that. The controller is powered by an RP2040 microcontroller and custom firmware providing a USB bridge to the GPI. The enclosure is 3D printed and everything is hand assembled.

The Stream Deck plugin connects to the GPI controller automatically and provides three different key options:

  1. Latching button
  2. Momentary button
  3. Multi-function button

The latching button is the most common and simply toggles a function on and off, e.g. false color or a tally light.

The momentary button is useful for stateless functions like powering the monitor on.

Finally the multi-function button allows you to combine multiple latching functions. A great use for this is creating a yellow tally, which is the combination of red and green.
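The three behaviors reduce to a small piece of state logic over which pins are pulled to ground. This is a hypothetical model for illustration, not the plugin's actual implementation:

```python
class GPIController:
    """Hypothetical model of the three key options: a latching function
    toggles a pin, a momentary function pulses it, and a multi-function
    key drives several latching pins at once (e.g. red + green = yellow tally)."""

    def __init__(self):
        self.pins = set()  # pins currently pulled to ground

    def latch(self, pin):
        # Toggle: pull the pin to ground if released, release it if held
        self.pins ^= {pin}

    def momentary(self, pin):
        # Close then immediately release; no state is kept
        return ("pulse", pin)

    def multi(self, pins):
        # Combine several latching functions into one key press
        for pin in pins:
            self.latch(pin)
```

In the real controller the `pins` set would correspond to GPIO lines on the RP2040 sinking the monitor's GPI pins to ground.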


The Stream Deck plugin is going to be submitted to the store and will be available as a free download, although supporting development through Ko-Fi is greatly appreciated.

The design of the GPI controller is just about finished, so I'm getting ready to do a small production batch. I'm hoping to gauge interest early so I know how many to make.

If you'd like to be one of the first to try them out please reach out.

  1. To see the controller in action check out the little promo I made for it.

Digital techs spend a lot of time organizing files; it's our primary job. We create folders for every shot, use tokens when processing to organize files automatically, and then trash files into a single folder. Scrolling through thousands of files to find a missing frame isn't exactly fun.

To help solve this I've written a script that adds some order to your session's trash. Running the script trashes the selected variants in an orderly way; think of it as using the Image Folder Name token. Files are sorted into folders in the session trash that mirror your capture folders.

Need to find an image? No more hunting around, just browse to the shot folder and you're off to the races.
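Conceptually, the script computes a destination like the sketch below. This is a simplified illustration assuming a standard session layout with a top-level Trash folder; the real script works through Capture One's scripting interface, not raw paths:

```python
from pathlib import PurePosixPath

def trash_destination(image_path, session_root):
    """Mirror the image's capture folder inside the session trash,
    so trashed files stay grouped by shot."""
    image = PurePosixPath(image_path)
    # image.parent.name is the capture (shot) folder the image came from
    return PurePosixPath(session_root) / "Trash" / image.parent.name / image.name
```

So an image trashed from `Capture/Shot_01` lands in `Trash/Shot_01`, and finding a missing frame is just a matter of browsing to the matching shot folder.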

Details and Caveats

When trashing images that are all in the same favorite, the script is optimized to do everything as quickly as possible and takes advantage of batch operations. When trashing from an album where images from multiple folders are visible, the script has an alternate mode that isn't as optimized, but is still plenty fast.

The script only mirrors the image folder name, not the entire hierarchy in the session. This might be something I'll add in the future if there's a need for it. If you use more complex session structures let me know.

Capture One famously doesn't handle subfolders well, and the Trash is no exception. When using the script you'll have to browse to the subfolders manually.


If you find yourself wishing for a more organized Trash Experience go download the script and take a look at the rest of the collection. If you find what I do helpful consider supporting my work through either a one-time donation or by becoming a member.


Guidelines got its first update of the year, and its biggest update to date, yesterday. Major parts of the user interface were updated, and there are major changes to what you can do with frame lines.

The update is available in-app and can be downloaded from the project page.


Grids

Your frame lines can now do more than just frame up: they can help you compose your shot with the addition of grids. Grids can be created in two ways:

  1. Pixel grids, creating grid lines at the specified pixel spacing
  2. Divided, creating evenly spaced grid lines

The grid lines follow the existing frame line styling and can be linked to create a unified grid.
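The two grid modes boil down to simple arithmetic over one dimension of the frame; a minimal sketch (function names are mine, not from Guidelines):

```python
def pixel_grid(size, spacing):
    """Grid line positions every `spacing` pixels across a dimension."""
    return list(range(spacing, size, spacing))

def divided_grid(size, divisions):
    """Positions that split a dimension into `divisions` equal parts."""
    return [round(size * i / divisions) for i in range(1, divisions)]
```

A rule-of-thirds overlay, for example, is just `divided_grid(width, 3)` and `divided_grid(height, 3)`, while a pixel grid keeps its spacing fixed regardless of frame size.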

Background Fill

Both frame lines, and the overlay itself, can be given background colors allowing you to add visual emphasis to the parts of your image that are most important. With nested frame lines complex overlays can be created to help guide your shot.

Updated Interface

The frame line controls have been moved out of a popover and into a top-level tab, giving persistent access to key adjustments. With the introduction of opacity throughout the overlay the preview has also been given a Very Fancy checkerboard pattern found in graphics editors the world over.

One of the primary jobs of a DT, especially on a high-volume set, is managing images as they come in. This usually relies on setting the capture folder and some basic tokens for setting the name. If the tech gets behind (or the photographer gets ahead) it's easy to have images mis-filed, requiring renaming, which, depending on the naming convention, can be complicated or error-prone.

Current capture naming is just file naming, and thus once a name is set that's what it is, no matter where that file is (or is supposed to be). Take the venerable <Destination Folder Name> token, which adds the name of the enclosing folder to the filename, one of the most common naming schemes out there. When a file is moved, however, the name is no longer correct. Renaming is easy, but now you have to contend with file counters, other tokens, or plate info that's been manually added.

However, if the filename weren't a fixed piece of information, but instead derived from metadata, the name would always be correct.


Live Tokens

Using tokens not just for initial naming, but as the live source of the filename, would mean the filename always matches the state of the metadata. For instance, the naming scheme <Image Folder Name>_<Capture Counter> would always match the current image folder and global capture counter. Renaming a file would be as simple as moving it; remembering to rename and set the counter would no longer be necessary.
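A live token renderer is conceptually tiny. Here's a hypothetical sketch in which the filename is re-rendered from metadata on demand, so moving a file or bumping a counter is automatically a rename (the angle-bracket token syntax follows Capture One's convention; the code is mine):

```python
import re

def render_name(template, metadata):
    """Render a filename from live metadata.

    Tokens are written <Like This>; unknown tokens are left as-is
    rather than raising, mirroring how naming templates degrade."""
    def sub(match):
        key = match.group(1)
        return str(metadata.get(key, match.group(0)))
    return re.sub(r"<([^>]+)>", sub, template)
```

Under this model a "rename" is just a metadata edit: change the image folder in the metadata and the next render of `<Image Folder Name>_<Capture Counter>` produces the corrected name, counters and manual plate info untouched.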

Live tokens would also make it easy to remap metadata values for a given token. For instance, mapping color tags used for tagging shot types (hero, grey card, plate) could be done automatically and, importantly, consistently. Similarly, it's common to add info about a plate to the filename (LT, DK, glow, empty set), which is really metadata that has traditionally been captured as part of the filename.

In some cases the filename needs to hold a lot of information: brand, SKU, color, product name, select state, plate type, etc. Usually much of this is predefined and shouldn't be changed. Storing that information as metadata and rendering it with tokens would make it harder to accidentally change. For a theoretical tethered capture application perhaps the "filename" text field would actually be a tiny single-field metadata editor for only the data needed.

Fixing bad naming would be trivial. Perhaps on day one the client didn't care much about the naming, but on day two they've found a new appreciation for the art (of file naming, that is). For large shoots, batch renaming can be a surprisingly slow process, and again complex if some parts of the name need to be kept intact. With everything in metadata, renaming the shoot is as simple as reordering the tokens.

A final benefit is the option of having multiple names for an image. The DAM can have one name, whereas the designers can have a shorter one, all tied together by an image ID. Or in the case of fixing naming, keeping the original as a reference while going forward with the new scheme.


Counters

One of the most important pieces of information is the most easily changed: the file counter. Some workflows want a single continuous counter for the whole shoot, others one per shot. What happens if you import additional images from a card, or rename images? Current software treats all of these counters as separate, much to the chagrin of many a digital tech.

Computers, and I believe this to be an uncontroversial statement, are very good at counting. There isn't a reason for us humans to have to keep track of so many different numbers. There are a number of different counter tokens I believe would be useful.

  • A "Global Capture Counter". This one simply counts up, and could be thought of as a unique ID for an image.
  • A "Sorted Counter". Pick your sort type, typically date, and it gives an index into that list. Importing images would allow them to be easily sorted correctly.
  • A "Sorted Folder Counter". The same as above, but on a per-folder level. Adding or removing images wouldn't require renaming.

Wrapping Up

Given the importance of correctly named images I believe we need to change how we treat them. In much the same way as "ImportantDoc-V1", "ImportantDoc-Final", "ImportantDoc-Final-Final" isn't the best backup method, relying solely on the filename, without leveraging the abundance of metadata available, isn't the best way to store critical information about the contents of a file.