Sonoff L1 & Home Assistant

10 Aug 2020

Recently I bought a bunch of new Sonoff IoT devices; among them was the Sonoff L1, a WiFi-connected LED strip. Like the rest of the Sonoff equipment, I figured it would be supported by ESPHome for easy integration with Home Assistant. As it happens, this wasn’t the case, and the whole project ended up being far more complicated than I expected.

Flashing the Controller

The L1 doesn’t expose any headers, so you’ll need to solder wires to the UART pads. Luckily this only requires a few minutes of soldering and you’re good to go.

Flashing the ESP8285 proves a little more difficult as there’s no exposed GPIO0 pin. The Tasmota guide has a good illustration showing which pin on the chip itself needs to be shorted to ground. While this seems like it might be tricky, you quickly get the hang of it.

Once the ESP is booted into programming mode, it’s an easy matter to flash ESPHome, which provides OTA updates for future changes.

The Software

I’d initially followed some guides for a similar LED controller which used PWM to control the RGB channels; however, the L1 uses a Nuvoton N76E003 to drive the LEDs, and the ESP communicates with it over serial. Some clever people over on the Home Assistant forums managed to figure out how the board works, everything from the serial protocol to the commands.

After several rounds of “I don’t know enough to make this work” I eventually started looking at the Tasmota implementation along with the ESPHome custom light docs and began coding up a custom component. Eventually I managed a basic, but working, implementation which parsed Home Assistant’s input and passed it along to the LED strip.

I would like to figure out how to support some of the other modes in the UI. The music sync mode has parameters for both sensitivity and speed. There are also several “DIY” modes which allow the user to specify colors for the controller to cycle through.

Putting It All Together

I’ve uploaded the custom component and an example config on GitHub. All you need to do is download sonoff_l1.h into your ESPHome directory, update led_strip_1.yaml with the specifics of your installation, and you should be off to the races.

There are some limitations at the time of writing. While Home Assistant can send changes to the L1, the remote won’t push changes back to Home Assistant. Additionally, not all of the L1’s modes are fully supported yet.

SwiftGraphics

28 Jul 2020

It’s been a couple of months since I began working on SwiftGraphics, and it’s already grown quite a bit; most of that growth has come with the requisite growing pains. With very few exceptions, every time I’ve started a new generative piece I’ve found a part of the codebase that was fundamentally broken in some way. The good news is that this forced me to actually write tests for more of the code.

One of the largest changes I made was to the ray tracing system. The original version required each ray to have a termination point on a bounding box, along with a separate RayTracer protocol which allowed an object to modify a ray. The updated version unified a couple of protocols, allowing any Shape that can calculate intersections to handle1 rays.

With the new ray tracer in place, the first real feature to come from the update was recursive ray tracing. Simply put, this means a ray can interact with objects multiple times: two mirrors reflecting rays back and forth, for instance.

Another piece of low-hanging fruit I’d been after for a while was Perlin noise. Much like everything else, there were a few false starts, but eventually I implemented the p5.js version of Perlin noise. Naturally, the thing to do was to use the generator to create plots of Perlin noise as a linear sequence.
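
The library’s generator follows the p5.js implementation, which is really value noise smoothed with cosine interpolation rather than classic gradient Perlin noise. As a rough illustration (not the exact port), a 1D version of that structure looks something like this in Python:

import math
import random

PERLIN_SIZE = 4096
perlin = [random.random() for _ in range(PERLIN_SIZE)]

def scaled_cosine(t):
    return 0.5 * (1.0 - math.cos(t * math.pi))

def noise(x, octaves=4, falloff=0.5):
    # Sum several octaves of smoothed random values, doubling the
    # frequency and halving the amplitude each time.
    x = abs(x)
    total, amp = 0.0, 0.5
    for _ in range(octaves):
        xi = int(x)
        a = perlin[xi % PERLIN_SIZE]
        b = perlin[(xi + 1) % PERLIN_SIZE]
        total += (a + scaled_cosine(x - xi) * (b - a)) * amp
        x *= 2
        amp *= falloff
    return total

ys = [noise(i * 0.02) for i in range(400)]  # a linear sequence, ready to plot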

Finally, as was linked to at the top of the post, I’ve open-sourced SwiftGraphics. This comes with all of the changes mentioned above, along with actual documentation and an example app. The library is mostly built for my needs, but I’d love to get feedback to make it more useful for everyone.


  1. In most cases this simply means the primitive shape returns an empty array, which has the effect of casting a shadow. [return]

Capturebot 2020.2

28 Jun 2020

Last week I released the first update to Capturebot, version 2020.2, with a slew of new features and bug fixes. The update mostly centers on making it easier to work with mappings.

The biggest change is a shift away from a single window to a document model, which means mappings can now be saved and shared. For a digital tech this is a perfect way to keep different mappings for each client. On the studio side mappings can be easily distributed to workstations for freelance techs on location. Mappings can also be locked to prevent editing.

There are a number of updates related to importing images. Folders of images can now be opened (along with their subfolders) and files can be dropped onto the images view. There’s also support for opening audio and video content; anything that Exiftool can write to is supported.

Another major feature is a new, free, write-only mode. After the trial expires Capturebot will still be able to open mappings and write metadata; however, none of the mappings can be changed.

The final big update is the addition of a small shotlist view. This brings Capturebot full-circle back to its origins as a shotlist tracking application. The plan is to enhance the shotlist over time and make it a larger part of the application.

You can check out all of the changes and download a trial over at the website.

Generative Art in Swift

23 May 2020

For the past few days I’ve been working on my own Processing-like graphics library built in Swift. I got tired of working in JavaScript, and for lack of a better option, decided that writing my own library would be the way to go.

I didn’t want to simply port Processing to Swift; someone’s already done that. Instead I wanted to make a Swifty-er version that was more protocol- and object-oriented. Today I got it to a state where it’s functional and I’m happy with how it’s implemented.

One of the main reasons for wanting to build my own library was to make a nicer ray tracing system. I’d made a simple version in P5.js, but the idea of adding more features using JavaScript was… unappealing. After the basics (circles, rectangles) I wanted to add Fresnel lenses.

I tried several methods to figure out the angles. Initially I wanted to use vectors, but bailed on it (because I hadn’t implemented all of the vector math required for the operations) and decided that I should be able to use the Pythagorean theorem to make a triangle from the line and calculate the angles. This initially worked, but ended up with far too many edge cases: everything from which way the line sloped to horizontal and vertical lines.

Eventually I went back to the vector math. I knew that it was possible, easy even, because of a P5.js example demonstrating non-orthogonal reflection. After adding some more vector math operations I was able to reliably determine the angle perpendicular to the Fresnel lens line. This also gave the lens a single direction; it acts as a Fresnel from one side, and a collimating mirror from the other.
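
SwiftGraphics itself is written in Swift, but the underlying math is small enough to sketch here (in Python, with names of my own invention, not the library’s API): rotate the lens segment’s direction by 90° to get its normal, and use the sign of the dot product between the incoming ray and that normal to tell which face was hit.

import math

def unit(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

def segment_normal(p1, p2):
    # Rotate the segment's direction 90 degrees to get a perpendicular vector.
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    return unit((-dy, dx))

def reflect(d, n):
    # Mirror reflection: d' = d - 2 (d . n) n
    d = unit(d)
    k = 2 * dot(d, n)
    return (d[0] - k * n[0], d[1] - k * n[1])

n = segment_normal((0.0, 0.0), (0.0, 10.0))   # a vertical lens segment
incoming = (1.0, 0.3)
hit_front = dot(unit(incoming), n) < 0        # which face of the lens was hit
outgoing = reflect(incoming, n)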

Processing & Generative Art

01 Apr 2020

I follow a number of people on Twitter who make generative art, and it’s something I’ve been wanting to try myself for a while. With everything going on I finally made the time to sit down and really play with a tool called Processing.

One of my favorite generative artists, Anders Hoff of Inconvergent, does some amazing work with plotters. My favorite piece of his is 706dbb4. I love the way the intersecting lines form a sphere, and the lines that break out of the confines of the circle are almost solar flares.

I wanted to generate a piece inspired by this print, and to create something that could be drawn on a pen plotter.

I started off by connecting random vertices on the edge of a circle to one another, creating a random web of lines. Eventually I added a limit to the angles at which the vertices could be placed, which added a more abstract character. After adding randomization to all of the parameters, I just kept rendering new images.

Andrew Rodgers on Twitter suggested I play with more advanced math functions like sine, cosine, and exponents. The advice came right as I was experimenting with Bezier paths instead of straight lines. I created another set of vertices and modified their positions:

anchorPoint = pointOnCircle(center, center, circleRadius + 100, tan(r))
anchor1X = random(0, canvasSize)
anchor1Y = anchorPoint[1]

anchor2X = anchorPoint[1]
anchor2Y = pow(log(anchorPoint[0]), 3)

The blue and green squares are the anchors that Bezier paths move through, each with a start and end point on the circle:

The new images are much more energetic, and when the angles are small produce an internal flare of lines, as seen in the third image below.

All of the images (and possibly more, over time) are available in the new Generative Art gallery. I hope this will be a fun new area to explore.

A Useless Picture Frame

27 Mar 2020

Many months ago my father gave me an old, dilapidated film splicer. I had the intention of cleaning it up and displaying it somewhere? Maybe? I didn’t really think it through, but I do love weird old things like that. The splicer was shuffled around a few times by me and my roommates, and eventually ended up sitting at the end of a hallway, mostly forgotten. Until two days ago.

Due to circumstances beyond my control I’ve found myself with some, ahem, extra free time, and while walking down the aforementioned hallway, I saw the splicer and knew exactly what to do with it.

The Hardware

The whole system fits neatly inside the original film viewer.

A 1.44” TFT display, a Wemos D1 mini, and a battery are housed in the film viewer, which originally held a lightbulb in the top portion. I wired the original power switch to continue controlling power for the system, and a barrel jack for charging the battery sits where the original power cable entered.

The Software

While the hardware was relatively straightforward, the software was another matter. This was the first time I’d dabbled with running a display on an ESP board.

I was thrilled when I eventually managed to get the C++ working to display a line of text, and even more so when I had a tiny web server running that let me remotely push text to the screen. Displaying images, however, was another story. Eventually I decided to switch from Arduino to MicroPython, which is a much more familiar programming environment.

I eventually found a display library that worked and was off to the races. With a little bit of Python I wrote a very simple slideshow player that pulled images out of a folder, read them in, and displayed them. After a few iterations the player even saved its current position between boots.
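
For the curious, the loop boils down to something like the sketch below. The display driver here is a hypothetical stand-in (an object with a draw_bmp(path) method), not the actual library I used, and the file names are made up.

import os
import time

IMAGE_DIR = "images"
STATE_FILE = "position.txt"

def load_position():
    try:
        with open(STATE_FILE) as f:
            return int(f.read())
    except (OSError, ValueError):
        return 0

def save_position(i):
    with open(STATE_FILE, "w") as f:
        f.write(str(i))

def run(display, delay=30):
    # `display` is a hypothetical driver object exposing draw_bmp(path).
    images = sorted(f for f in os.listdir(IMAGE_DIR) if f.endswith(".bmp"))
    i = load_position()
    while True:
        display.draw_bmp(IMAGE_DIR + "/" + images[i % len(images)])
        i += 1
        save_position(i % len(images))  # remember where we are between boots
        time.sleep(delay)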

The files load incredibly slowly, and are displayed line-by-line, which I think gives a rather charming transition between images.

Displaying Images

The display I used has an astonishing resolution of 128x128 pixels, and a mere ~100x65 pixels that are actually visible through the magnifier. This, in my opinion, results in some delightfully lo-fi images.

The images need to be resized, matted, and converted to bitmaps before being uploaded to the board.
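
The conversion is easy to script on a desktop machine before copying the files over. Here is a rough sketch using Pillow; the sizes come from the dimensions above, but the exact matte and tooling are up to you.

from PIL import Image

VISIBLE = (100, 65)   # roughly the area visible through the magnifier
PANEL = (128, 128)    # the TFT's native resolution

def prepare(src, dest):
    img = Image.open(src).convert("RGB")
    img.thumbnail(VISIBLE)                     # scale to fit the visible window
    canvas = Image.new("RGB", PANEL, "black")  # black matte fills the rest
    offset = ((PANEL[0] - img.width) // 2, (PANEL[1] - img.height) // 2)
    canvas.paste(img, offset)
    canvas.save(dest)                          # BMPs keep the loader simple

prepare("photo.jpg", "photo.bmp")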

An image ready for display

In Conclusion

The entire project only took a couple of days and was built entirely with parts I had lying around. When I showed my roommates the end result they had trouble even telling what the images were, and I think that’s part of the fun. This was a project just for me to experiment and try something new.

That being said, if anyone would like to build their own, all of the source code is published on GitHub.

CaptureCursor

18 Feb 2020

One of the tiniest annoyances of using computers these days is keeping track of the cursor. On a single display losing the cursor isn’t terrible; macOS has a feature to help you find it, as long as you can see all of the displays. If you’re running a set with a remote viewing monitor, it’s easy to accidentally move the cursor out of view, and wrangling it back isn’t always easy.1

CaptureCursor does one thing: move the cursor to the center of the current Capture One session or catalogue window. All you have to do is hit a keyboard shortcut and CaptureCursor identifies the session, makes Capture One the active application, and centers the cursor in the window.

Much like my Capture One scripts, I wrote CaptureCursor for me to use on set, but I hope other DTs will find it useful. CaptureCursor is available for $4.99 with a 14-day trial.


  1. Who amongst us hasn’t got the cursor stuck in the misaligned edges of a second display? [return]

Capturebot: Take 3

07 Feb 2020

I’ve written rather a lot about Capturebot, but it’s almost never been the same piece of software. It’s been a name in search of the right project, and if I’m lucky this will be it.

Something that I end up dealing with quite often is embedding metadata in images. Usually this involves a lot of copying and pasting from a spreadsheet (or worse: a PDF), double-checking the formatting, splitting keywords up properly, ensuring every image has the right metadata, etc. I’m sure every digital tech knows the drill.

Capturebot Redux

The new incarnation of Capturebot is a tool to help manage your metadata. One of the most tedious tasks on a shoot is ensuring every shot has the correct metadata. On larger shoots, where each image needs to have the correct keywords, crew names, and product information, doing this manually can be time-consuming and error-prone.

Capturebot will read a CSV, parse the metadata inside, let you map the fields your DAM uses to real EXIF tags, and then assign everything to the correct images.

Filenames in the spreadsheet can be matched in a number of ways. Although the most common is probably using the name as a prefix, you can also set up complex matches using regex to select images based on any criteria.
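
To illustrate the difference (this isn’t Capturebot’s internal code, just the idea): a prefix match only checks the start of the file name, while a regex can encode whatever naming convention a shoot uses.

import re

filename = "ABC123_0042.cr3"  # hypothetical capture name

# Prefix matching: the spreadsheet row "ABC123" matches anything starting with it.
print(filename.startswith("ABC123"))                     # True

# Regex matching: select only frames 0001-0999 from that set, for example.
print(bool(re.match(r"ABC123_0\d{3}\.cr3$", filename)))  # True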

Public Beta

Capturebot is currently available as a public beta for everyone to test. You can find more information and the download link at capturebot.app.

Please do let me know about any bugs, and even more importantly, any feature requests. I’m hoping to get more techs as beta testers so I can make a better tool for all of us.

You can get a beta license from the Capturebot > Request Beta License menu.

GeoVisualizer

29 Jan 2020

Shortly after XOXO in 2018 I began running a Compass server to track and store my location. The idea was to eventually build a cool visualization of everywhere I’d been, and nearly a year later I got around to looking at the data.

I wanted a good way to visualize the data I’d collected and began testing a tool called MapShaper, which lets you manipulate GeoJSON files and output SVGs. By default MapShaper will just plot each point as a little black dot, which isn’t terribly interesting.

I experimented with ways of applying color to the map to make it a bit more interesting. One of the first things I tried was varying the color by altitude. Compass collects a lot of points, so they blend together into a line.

The result was pretty, but didn’t really say anything about what I had done there. I wanted something a little more abstract. Eventually I learned that MapShaper can reduce the precision of coordinates and count features, and combining these meant I could get a count of how long I’d spent at a certain location.

With a bit more scripting and a surprising amount of JavaScript I had a new look. The coordinates are reduced to a few significant digits, which aligns them to a grid, and each grouping is counted and colored based on its point count.
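
The actual pipeline used MapShaper and JavaScript, but the idea is simple enough to sketch in a few lines of Python (assuming the GeoJSON contains Point features):

import json
from collections import Counter

def grid_counts(path, digits=3):
    with open(path) as f:
        features = json.load(f)["features"]
    counts = Counter()
    for feat in features:
        lon, lat = feat["geometry"]["coordinates"][:2]
        counts[(round(lon, digits), round(lat, digits))] += 1  # snap to a grid cell
    return counts

# Each cell's count then drives the fill color of that grid square on the map.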

After a little while I got tired of having to manually change values in scripts, so I wrote a little GUI wrapper around my scripts, and then… went a bit further.

I wanted to recreate the same visualization on a live map, so I turned to Mapbox.

Mapbox required a lot of trial and error, but I eventually had all of the pieces in place. The map groups the points dynamically, colors them by count in the group, and can handle far more points than MapShaper can.

Something I didn’t expect after putting the live map together was how much fun it would be to simply zoom around and revisit places I’d been previously. I was especially happy when I found a few GPX files I’d recorded back in 2012 while studying in France; I just wish they were more complete.
