For the past few months I've been working on a brand new app for digital techs on set. Capturebot 2 helps digital techs manage their sessions by building a shotlist and tracking files, ensuring that everything is named and processed correctly.

The History

The premise behind Capturebot is something I, and others like Brian von Glahn, have been messing with for a while. I'd built scripts and small command line tools to traverse the file system matching RAW files to their output files, but they were always a little clunky. I'd get something working on my computer, send it to a friend to test, only to find they set their sessions up differently, so the tool didn't work. Each time I shelved the idea because the scope was too large.

The work on Capturebot really began when I got the dreaded call after a shoot asking about some files. I checked my backup only to discover that, indeed, at the end of a week-long shoot a couple shots were missing output. I used that session as a standard test when building Capturebot.

What Capturebot Does

Capturebot, at its core, searches for RAW files and output files in order to find errors that I'm sure every digital tech has encountered at some point. It's designed to be flexible enough for any workflow and fast enough that Capturebot can be used throughout a shoot to help catch possible issues early.

Capturebot is built around a powerful token-based file matching engine which scans your session for RAW files and assigns them to shots. It can also automatically build your shotlist as you go, for those shoots where you don't have one pre-built. Any files that are in the wrong folder or have naming errors are flagged for easy review.
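To make the idea concrete, here's a minimal sketch of what token-based filename matching could look like. This is my own illustration in Python, not Capturebot's actual token syntax or implementation; the `{shot}` and `{frame}` token names are assumptions.

```python
import re

# Hypothetical token syntax: tokens like {shot} become named capture
# groups, everything else is matched literally.
TOKEN = re.compile(r"\{(\w+)\}")

def pattern_to_regex(pattern: str) -> re.Pattern:
    """Convert a token pattern like '{shot}_{frame}.cr3' into a regex."""
    parts = []
    pos = 0
    for match in TOKEN.finditer(pattern):
        parts.append(re.escape(pattern[pos:match.start()]))
        parts.append(f"(?P<{match.group(1)}>.+?)")
        pos = match.end()
    parts.append(re.escape(pattern[pos:]))
    return re.compile("^" + "".join(parts) + "$")

regex = pattern_to_regex("{shot}_{frame}.cr3")
m = regex.match("A01_0042.cr3")
# m.group("shot") == "A01", m.group("frame") == "0042"
```

A matcher like this is what lets one pattern adapt to wildly different session layouts: the tech describes the naming convention once, and every file either slots into a shot or gets flagged.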

Capturebot can also read metadata from your RAW files. This is particularly useful when an art director needs to make selects on set, as you can easily see which shots have tagged or rated images, and how many of each. The metadata can be read from Capture One settings files, XMP sidecars, and even directly from inside an EIP.

By leveraging the same token engine, Capturebot can quickly identify any RAW files that are missing output. For each process recipe you define a corresponding output pattern, which tells Capturebot exactly where to expect the file. Any missing files are shown in a list so you don't have to hunt for them.
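At its core this kind of check reduces to a set difference: expected output names minus the files actually on disk. A rough Python sketch, assuming a flat recipe folder of renamed files (the parameter names here are illustrative, not Capturebot's configuration):

```python
from pathlib import Path

def missing_outputs(raw_names, recipe_dir, ext, output_root):
    """Return RAW frame stems whose processed output file doesn't exist.

    raw_names: stems like 'A01_0042'; recipe_dir and ext describe where
    the recipe's output should land under output_root.
    """
    root = Path(output_root)
    # Collect every file the recipe has actually produced
    existing = {p.name for p in root.glob(f"{recipe_dir}/*{ext}")}
    # Anything without a matching output is flagged
    return [name for name in raw_names
            if f"{name}{ext}" not in existing]
```

The appeal of doing it this way is speed: both sides of the comparison are just directory listings, so the check can run repeatedly during a shoot instead of only at wrap.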

Finally, Capturebot can generate reports for your session in a variety of formats. The reports can be shared with the photographer, client, producer, or anyone else as a way to help verify that everything is as it should be. You can make reports for the whole session or for individual shots as needed.



Where to Get It

Capturebot has been in an extended beta by some of the top techs in the industry and is now available for download. There is a fully-featured two week trial available and it can be purchased for $99, which includes a full year of updates. Ongoing updates are available with a $49/year subscription.

The updates subscription is, of course, optional, and importantly you will always keep what you've paid for.

I have big plans for Capturebot with a long list of features to add like shot status tracking, metadata import and writing, customizable reports, rule-based output matching, additional summaries, and more.

Give Capturebot a try, read the documentation, and let me know how it fits into your workflow.

The first app update of the year goes to First Contact with a minor update bringing enhanced menu bar functionality, some bug fixes, and support for images with an alpha channel.

Main Features

First Contact now has a brand new Page menu, which has a suite of commands for changing the page layout. The header, footer, and image labels can be toggled from the menu bar, or with keyboard shortcuts, as can the grid's rows and columns.

The biggest addition is the Match Fonts commands, which will sync the font family, size, and color between the header, footer, and image labels. This makes updating the font selection in a layout faster as each group no longer needs to be set manually.

Extras

  • Images with transparency, like screenshots, are now rendered correctly, with the page color showing through the image
  • The metric page presets have been recalculated to provide accurate sizing in millimeters and the A4 page defaults to landscape like the other page sizes
  • All images can be removed from the Edit menu, and when removing the last image the PDF size estimate correctly reflects that fact
  • You can also remove an image by right-clicking on it in the PDF preview
  • Files and folders can be dropped into the PDF preview area, not just the sidebar
  • The app now uses a custom accent color, matching the rocket in the icon

The update is available now via the in-app updater or by downloading directly. As always there is a two week trial available.

The latest version of First Contact, which is out now, has been in development for over a year and represents a nearly complete rewrite of the app from the ground up. The original contact sheet rendering engine has been scrapped, replaced with a brand new one that renders faster and produces smaller, higher quality PDFs. The layout settings have been expanded with the addition of headers & footers, tokens, and more.

The Rendering Engine

First Contact's new rendering engine provides instant feedback on layout changes. Updating the number of rows, changing a font, or adding new images no longer requires re-rendering the contact sheet. Previewing images and estimating the PDF size now take full advantage of multi-core CPUs, so the app stays responsive. The engine is also more flexible, providing a stronger foundation for future updates.

Rendering contact sheets isn't just faster; it's also higher quality. The rebuilt Compressor can render higher quality images at a fraction of the file size, even when embedding full resolution images. For the times file size is of utmost importance, images can be resampled to create the smallest possible PDFs, suitable for emailing.

Layout Options

First Contact now includes one of the most requested features: headers and footers.

Both have leading, centered, and trailing components and can be styled separately with different font families, sizes, and colors. Add tokens with dynamic information like the current date, image count, page count & number, PDF metadata, and more. Tokens, when combined with saved layouts, allow you to create templates for your clients or studio for consistent contact sheets.

Image labels have the same styling options, plus the small but important option of excluding the file extension.

Availability

The update is available to download now and through the in-app updater. This release of First Contact also introduces a new licensing system which allows for easier license management from within the app. If you're an existing customer a new license code is being emailed to you. If you have any questions please reach out.

A new update for the Capture One Stream Deck plugin just hit the marketplace featuring six new actions and over 20 new image adjustments available on the Stream Deck + dials. To date the plugin has been downloaded over 1,800 times!

The plugin has nearly 30 unique actions for everything from rating & tagging images, filtering collections, and navigating images to moving the capture folder, changing camera settings, adjusting images, and toggling the overlay & grid. Many perform actions that can't be done natively in Capture One, building on top of a rich collection of custom scripts.

Developing the plugin takes a lot of time and I'm glad so many people have found it useful. If you'd like to help support ongoing development consider donating on Ko-fi and tagging me in posts with your Stream Deck setups.

New Actions

  • Crop To Overlay: apply a crop that matches the overlay position
  • Set the current folder as the capture folder
  • Resume Counter: resume the capture counter from the latest image in a folder
  • Next Capture Orientation: set the camera orientation
  • Rotate images with keys and dials

Changed

  • Adjust camera settings with the Stream Deck + dials
  • Color Tag and Rating actions use macOS standard toggle behavior
  • Over 20 additional adjustments can be made from the Stream Deck + dials

I've had a long term obsession with 3D printing a medium format camera. It was one of the reasons I bought my very first 3D printer, and has been a project I've returned to several times. The last time I took a stab at the project was with my wooden 6x9 field camera, which was eventually upgraded with a sliding back that I used to do some studio work.

All of the previous attempts were cumbersome. The first version required taking the entire digital back off in order to focus and compose. The sliding back version pushed the lens so far back into the camera it was difficult to use and lost most of its movements. I really wanted something simpler.

So I designed something simpler.

The Camera

The simplest camera is just a box with an aperture and a sensing medium. In practice that means a pinhole camera, and those are cool, but I have a really nice 65mm Super-Angulon lens. I hastily put together an MVC1. On the front it has a hole for the Compur 00 shutter and on the back a spot to fit the Mamiya 645 adapter I've been using on all these projects.

The design is simple, not much more than a blend from a circle at the front to a square at the back. I added an Arca-Swiss tripod mount at the bottom for convenience. Most of the actual work was figuring out the best way to print the camera body in one piece, both to minimize potential light leaks and to make it easier to print.

Of course there are a lot of careful measurements that go into photography and especially camera design. I ignored all of them. I had a 65mm lens, so why not make a rash assumption and make the camera 65mm long? I wasn't planning on using this in a studio, so fixed focus at infinity would be fine. The results are, surprisingly, not bad, if a tad soft2. Unfortunately I forgot to take into account the extra distance the adapter added; as a result the focus was set to about four feet, which some back of the envelope math indicates is not infinity.
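The back of the envelope math is just the thin-lens equation. With the sensor at distance v behind a lens of focal length f, the in-focus subject distance u satisfies 1/f = 1/u + 1/v. The 4 mm of extra extension from the adapter below is my assumption for illustration, not a measured value:

```python
# Thin-lens check of the focus error: 1/f = 1/u + 1/v
f = 65.0            # focal length, mm
v = 65.0 + 4.0      # sensor distance with the (assumed) 4 mm adapter extension, mm

# Solve for the subject distance that's actually in focus
u = 1.0 / (1.0 / f - 1.0 / v)   # ~1121 mm
print(round(u / 304.8, 1))       # ~3.7 feet -- "about four feet"
```

The equation also shows why the error is so punishing: near infinity, a few millimeters of extra extension swings the focus distance from infinity down to arm's reach.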

Not yet realizing the focus issue, I took the camera out on a couple of short trips, first around the block, and then for a hike in the woods. The initial walk proved difficult, with half the frames being completely black. There were a few redeeming photos, like this accidental quadruple exposure, which I would love to know how to do purposefully.

The second outing proved to be much better. I used a tripod this time, which helped (especially because the exposure times were quite long in the forest). I think the camera being in direct sunlight may also contribute to misfires, though I can't be certain. There are several internal light baffles around the digital back, so there shouldn't be any extraneous light making its way in, but there were fewer problems in the dappled shade of the forest than on the sidewalk by my building.

I'm really pleased with the photos overall. It's fun to take photos and not really know how they're going to turn out. The digital back doesn't have live view, so you're truly just pointing and shooting. You can review the photo after you've shot it, but the quality and brightness of the screen really only give you the impression of the photo. I'm working on a new iteration backed by some actual math for calculating the correct focus distance. I might add a cold shoe to the top for accessories, but maybe that's taking away from the original design goal.


  1. Minimal Viable Camera. The design was hasty, the print was not.

  2. Note to self: look into hyperfocal distance

Looking back over what I’ve (mostly privately) written about XOXO I’m surprised to find a clear arc. The ethos of the festival has both tracked with and helped guide my return to freelancing. Now that the final XOXO is over I want to gather my thoughts on my three times attending and how they've helped me over the years.


In 2018, at my first XOXO, I had a strong sense of not belonging. At the time it was the biggest version of the festival and I felt like I’d been lucky to have somehow snuck into the party with the cool kids. I had a full time job and didn’t really make much online. Attending was aspirational. I’m not even sure I wrote an #intro post that year. The whole thing was overwhelming, in a good way.

In retrospect I found it a deeply moving experience. In full compliance with the XOXO Dream™️ it was one of the catalyzing moments for my return to the freelance world. XOXO also helped show me that I wasn’t alone: everyone, speakers included, felt imposter syndrome to some degree. The festival was really the first time I'd met other very online folks in person.


I left my full time job in June of 2019 and promptly went back to Portland for my second XOXO. There's an optimism throughout the whole event that's impossible not to get swept up in1; I'd missed that energy from the previous trip. That year I wrote an #intro, and had a few projects I'd begun working on for both clients and myself.

spongebob meme: five years later

During the intervening years I started many new projects, most of which I'm still working on. I wrote several applications, created pen plotter art, modeled in CAD for 3D printing, designed PCBs for electronics projects, and more. A lot of these projects were connected to my "real" profession as a digital tech, but others (read: the plotter art) were just for the sake of trying something new.


In short, I felt I was ready for XOXO this year. The community on Slack was a sounding board and support group, especially during the pandemic. The idea of returning to Portland, one more time, felt like going home2.

This year I think I was, for the first time, able to put most of my imposter syndrome aside. I was reunited with friends from previous years, and didn’t just ‘meet’ new people but, I think, made even more friends; people I’m going to follow and advocate for online as much as I can. It was nice to be around all of the folks from the internet again.

The other thing, and this is the self-serving part of the post, is that people recognized me from the internet. It’s one of the things I’d written about from my first XOXO, that everyone seemed to be a notable person online. I don’t have ‘an audience’ but to be recognized for your work is always flattering, and for me was a little bewildering.

I think the reason is that I've become much more willing to share my projects, and it’s because I was nurtured by one of the most supportive communities around that I had the confidence to publish my projects and to be proud of them. To have the support to find my niche and voice online. XOXO, in a sense, found me when I needed it.

Something I've taken away this year, especially after watching so many of the Art & Code talks, is wanting to make more things that aren't necessarily practical. I'd like to work on projects that explore a concept, teach me something new, or are just fun. I'm not sure what those projects might be, but I'm looking forward to finding out.


  1. One could, without the benefit of hindsight, say it was infectious.

  2. For multiple reasons, Portland will always be home.

All the way back in May I got a bad idea: Capture One for the GameBoy.

Obviously not all of Capture One, but some sort of controller. While bored one evening I built a very non-functional proof-of-concept GameBoy game with GB Studio demonstrating what the features of the "game" would be. In short it would allow you to add color tags and star ratings to images. As a bonus I wanted the whole system to be (optionally) wireless.

A couple of rabbit holes into GameBoy link cable communication later, I'd discovered that it uses SPI, which is easy to work with. Sending data via the GameBoy link port isn't actually all that difficult. GB Studio has an easy drag-and-drop node you can use to transmit (and receive) data. Unfortunately the initial tests weren't great. GB Studio, as a high-level programming environment, expects a specific format (and response) for the transmitted data, whereas all I wanted to do was send a couple of bytes of information. I tried over and over to figure out how to make GB Studio happy, but in the end never could get it working reliably.

Amazingly there's an entire SDK for the GameBoy: GBDK. It turned out not to be all that much work to get the project set up, and having written C++ for the Raspberry Pi Pico before really helped. The game is fairly simple, it sits in a loop waiting for input and teleports the "player" between positions. With reliable link communication ready the next step was doing something with that data.

In the "game" you play as the cursor, navigating the wide world of Capture One. The system lets you change image selection with up and down, set color tags and rating with Ⓐ and clear the tag or rating with Ⓑ. When you press a button the game sends the data over the link cable to whatever's on the other side.
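Since the whole point was to send just a couple of bytes, the protocol can be tiny. Here's a hypothetical sketch of how such a two-byte command might be packed; the opcodes and field layout are purely illustrative, not the protocol the adapter actually speaks:

```python
# Illustrative opcodes -- not the real protocol, just a plausible shape for it
CMD_SELECT = 0x01   # up/down navigation
CMD_TAG    = 0x02   # set color tag
CMD_RATE   = 0x03   # set star rating
CMD_CLEAR  = 0x04   # clear tag or rating

def pack_command(opcode: int, value: int = 0) -> bytes:
    """Pack an opcode and a 0-255 value into the two bytes sent over SPI."""
    if not (0 <= value <= 0xFF):
        raise ValueError("value must fit in one byte")
    return bytes([opcode, value])

# Rate the current image three stars:
pack_command(CMD_RATE, 3)  # b'\x03\x03'
```

Two bytes is well within what the link port can shift out per button press, which is why dropping down from GB Studio to GBDK, where you control exactly what goes on the wire, made the problem tractable.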

The adapter does two things: listen for data, either from the link cable or the radio, and pass it on to the computer via USB. Coincidentally Apple had just officially announced embedded Swift, so I figured this would be a fun project to test out programming for the RP2040 in something other than C++. After a little bit of messing around with the makefile for building the binary I was even able to share1 some code between the firmware and client library.

I decided to reuse a lot of the communication protocol I'd built for the GPI Controller2, and even added the link adapter into ControlRoom.

The final piece of the puzzle was the hardware. I designed a simple PCB for the Tiny2040, nRF24L01 radio, and definitely official GameBoy link port. All of the components are easy to solder by hand, so I put together a little kit that includes all of the electronics and a couple of 3D printed panels.

A friend, and fellow digital tech, had asked where you'd hold the whole assembly when using it wirelessly. I'd been thinking you'd leave it on a table, or maybe put it in your pocket. He suggested attaching it to the GameBoy. At first I didn't think it would be useful, it's not like it used the cart slot3 for anything. Eventually I decided to try it, so I designed and printed a version of the back case shaped like a GameBoy cartridge.

The whole project was quite the learning experience, exploring lots of new concepts. If you're a very particular kind of nerd the kits are available for purchase.


  1. Sharing is, perhaps, a strong word. The embedded code can't import any dependencies, so there's still a lot of code duplication. However the files live inside a Swift package, so I can at least run tests.

  2. I'm just realizing I haven't posted any updates about that in over a year.

  3. I very much would like to try to design a version that's built into a GameBoy cart.

Far too long ago I announced a major new version of ScreeningRoom, thinking that its release was both just around the corner and that it was unquestionably better in all ways. I was wrong on both counts. I heard from beta testers that they were still using the previous version because the menu bar functionality was critical, even with some of the bugs.

After mulling things over I decided to bring the improvements from V2 back to the V1 app. What this means in practice is the latest update still lives in the menu bar, but the underlying preview framework is all new.

Bug Fixes & Improvements

One of the most important changes is to the crosshair. The positioning bug has been fixed; no longer will the crosshair run off the edge of the preview. The crosshair also now always tracks the mouse, even when the previews are paused, and its lines run across the entire preview area, not just the display the mouse is in. This makes locating the cursor even easier.

The preview windows have also been rebuilt from the ground up. The popover now has a toolbar at the bottom with playback controls for quickly starting and stopping all displays. You can create separate windows for each display from the Display icon, which also has individual playback controls. Each preview window has independent playback control, even for the same display.

New Features

There are also a couple of new features that weren't available even in the V2 beta, both pertaining to how new windows behave.

The first is "Play streams by default", which does what it says on the tin. If you want to quickly see another screen you can now have the preview appear already playing the screens, no need to start them manually.

Second is an option to hide the main display by default. In most cases the main display is the one screen you can always see, so having it show up in the preview isn't terribly useful. This preference takes care of that.

Availability

The update is rolling out now and should show up automatically in the next day, and is also available to download from the website. If you haven't yet tried ScreeningRoom a full-featured 14-day trial is available.

In high school I loved playing a little Flash game called Planarity. I've absolutely no idea how I stumbled upon it, but I played it most days in the library before classes if I got to school early. In the intervening years between then and now it would occasionally cross my mind as a fun memory. In the years since I learned to code it's also occasionally crossed my mind, but with a more concrete notion: it would be fun to build an implementation of the game.

Last week it crossed my mind again, and this time I had an idea of how to actually build the game. I was wrong.

The algorithm behind Planarity is quite simple, but also more involved than I'd thought as a player of the game. The creator of the game, John Tantalo, was gracious enough to publish the algorithm in the public domain. The game isn't just a series of random vertices and lines; it's built on an underlying arrangement of intersecting lines, which in turn creates the puzzle to solve.
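The construction can be sketched in a few lines: generate random lines, take every pairwise intersection as a vertex, then connect consecutive intersections along each line as edges. Here's a minimal Python version of that idea (the game itself is Swift/SwiftUI; this is just the generation step, assuming the lines are in general position):

```python
import itertools
import random

def planarity_graph(n_lines=4, seed=None):
    """Build a planar graph from an arrangement of random lines.

    Vertices are the pairwise intersection points; edges connect
    consecutive intersections along each line.
    """
    rng = random.Random(seed)
    # Random lines in slope-intercept form (slopes almost surely distinct)
    lines = [(rng.uniform(-3, 3), rng.uniform(-3, 3)) for _ in range(n_lines)]
    points = {}   # (i, j) line pair -> vertex id
    coords = []   # vertex id -> (x, y)
    for i, j in itertools.combinations(range(n_lines), 2):
        m1, b1 = lines[i]
        m2, b2 = lines[j]
        if m1 == m2:
            continue  # parallel lines never intersect
        x = (b2 - b1) / (m1 - m2)
        points[(i, j)] = len(coords)
        coords.append((x, m1 * x + b1))
    edges = set()
    for i in range(n_lines):
        # Sort this line's intersections by x, then link neighbors
        on_line = sorted(
            (v for pair, v in points.items() if i in pair),
            key=lambda v: coords[v][0],
        )
        for a, b in zip(on_line, on_line[1:]):
            edges.add((min(a, b), max(a, b)))
    return coords, edges
```

With four lines in general position you get six vertices and eight edges; the puzzle is then presented with the vertices scrambled, and solving it means recovering a planar embedding.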

I was able to reuse a lot of my work from SwiftGraphics, which has most of the basic geometry code needed for line intersections and such. The hardest part was figuring out how to build the actual graph from the original intersection points. I'm not sure if my implementation is the most efficient, but it does seem to work.

The entire app is built in SwiftUI, including the actual graph. It does take some work to convince SwiftUI to do absolute positioning the correct way1, but it provides some really nice options for higher level things like styling and interaction. Planarity is perfectly suited to being played on the iPad, especially with an Apple Pencil.

There are still things to polish and features to add. For instance, I'd like to add a more comprehensive records section, keeping track of puzzle history like the fewest moves used to solve each graph and the total number of moves. It would also be fun to export an animation of the puzzle being solved2.

I would like to release the game when it's ready, even if no one else is interested in it. The app needs a little more polish, and a more original name. In the meantime I have graphs to untangle.

Update: The polishing is done and the game is now available to beta test on TestFlight.


  1. .frame().position(). not .position().frame()

  2. Perfect for all of that social media clout.

It's been a little while since the last Capture One Stream Deck plugin update, but this one is worth the wait. The latest version packs in a lot of new features and some important bug fixes. The update is available now in the Elgato marketplace.

New

  • Color tag and rating key actions can now toggle their value
  • Flip Crop action
  • Select last n images for quick review
  • Toggle stored filters in the current collection

Changed

  • Actions require selecting a value, rather than falsely indicating a default
  • Open Collection now has a folder icon
  • Sort action can now sort by all properties
  • Sort action toggles between two user selected options

Fixed

  • Plugin more reliably connects to Capture One on first install
  • Dial actions show their titles when an image isn't selected