I’ve been meaning to revisit my ray tracing system for a while, but never really had the motivation to dive back in. The way I’d built it was fragile and prone to infinite loops. The infinite loops are what finally got me to take another look: recent changes I’d made caused several sketches to never resolve.
My original ray tracing system simply collected line segments from each object it intersected. One of the big “improvements” I made was allowing objects to be reevaluated so a single ray could interact with the same object multiple times. Unfortunately I accomplished this with a recursive method. The closest intersection was found, then the method would be called on the next object. All of the intersections would be collected at the bottom of the stack and passed back up when the ray finished its journey. If it finished its journey.
A Formal Ray
For the second (major) attempt I switched over to a real Ray object to keep track of intersections. Another part of the code I updated was how a ray could be modified. A new RayTracable protocol gives an object the chance to modify a Ray directly, with the default to simply terminate it.
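The shape of that protocol can be sketched roughly like this; the type and method names here are illustrative, not the actual SwiftGraphics API:

```swift
struct Vector {
    var x, y: Double
}

final class Ray {
    var path: [Vector] = []
    var isTerminated = false
    func terminate() { isTerminated = true }
}

// Any shape that can find intersections gets a chance to steer the ray.
protocol RayTracable {
    func intersection(of ray: Ray) -> Vector?
    func deflect(_ ray: Ray)
}

extension RayTracable {
    // Default behavior: the ray simply stops at the object.
    func deflect(_ ray: Ray) { ray.terminate() }
}
```

A mirror would override deflect(_:) to re-aim the ray instead of terminating it, which is where the multiple-interaction behavior comes from.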
Once the new Ray was set up I began testing and quickly noticed that my circle intersections were wrong. The point of deflection wasn’t actually on the perimeter of the circle. I eventually discovered that I hadn’t correctly understood the t value, which is used to find the point of intersection along the ray. I found an article that explained what I was missing and managed to fix the method.
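For reference, the t value comes from solving |origin + t·direction - center|² = radius² for t, a quadratic whose smallest positive root is the first crossing of the perimeter. A hedged sketch of the math (names are illustrative, not the SwiftGraphics API):

```swift
import Foundation

struct Vector {
    var x, y: Double
}

// Smallest positive t where origin + t * direction crosses the circle,
// or nil if the ray misses it entirely.
func circleIntersection(origin: Vector, direction: Vector,
                        center: Vector, radius: Double) -> Double? {
    let oc = Vector(x: origin.x - center.x, y: origin.y - center.y)
    let a = direction.x * direction.x + direction.y * direction.y
    let b = 2 * (oc.x * direction.x + oc.y * direction.y)
    let c = oc.x * oc.x + oc.y * oc.y - radius * radius
    let discriminant = b * b - 4 * a * c
    guard discriminant >= 0 else { return nil }
    let t1 = (-b - sqrt(discriminant)) / (2 * a)
    let t2 = (-b + sqrt(discriminant)) / (2 * a)
    // The smaller positive root is where the ray first hits the perimeter.
    return [t1, t2].filter { $0 > 0 }.min()
}
```

Plugging t back into origin + t * direction gives a point that actually lies on the perimeter, which is exactly what was going wrong before.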
Each time I conformed a shape to RayTracable I’d find similar issues. After a spot of pest control I finally had a system that behaved correctly.
Putting it All Together
Once the math was taken care of I revisited my largest ray tracing sketch. I’d never set out to make a particularly performant ray tracer, but it was exceptionally slow. Looking into the drawing code I found I was regenerating the paths for every layer on each draw. A quick change to separate the calculations from the on-screen drawing helped a lot.
The final tweak I made was how I was rendering the layers. Before I’d updated the drawing code I ended up with a particularly nice sample render where the rays would pool into bright spots, which I then accidentally “fixed” and couldn’t replicate. Eventually I figured out the correct blend mode and drawing order to be able to control the effect to great success.
Over the summer, while California was in the middle of fire season, I decided, mostly out of morbid curiosity I suppose, to monitor the air quality inside my house. There are a number of commercial solutions available; however, they’re all either complex, costly, or both. I was looking for something that could take the place of my existing Wemos & ESPHome-based temperature and humidity sensors.
After a bit of research I found the PMS5003T, which measures PM 2.5, temperature, and humidity. The sensor uses a tiny ribbon cable, so it can’t be connected directly to the Wemos’ pins. I designed a simple circuit board for the sensor, with additional pins so it can be extended for future use. Unfortunately I wasn’t paying attention to the mil conversion and didn’t make the mounting holes anywhere near large enough.
I still need to find some little boxes for them; for the most part they’re just tucked into corners.
The software is pretty straightforward. The boards run ESPHome with the stock pmsx003 integration. The only real customization is a sliding window moving average to reduce jitter.
```yaml
sensor:
  - platform: pmsx003
    type: PMS5003T
    pm_2_5:
      name: "Office Emory Particulate Matter"
      filters:
        - sliding_window_moving_average:
            window_size: 15
            send_every: 15
    temperature:
      name: "Office Emory Temperature"
      accuracy_decimals: 0
      filters:
        - sliding_window_moving_average:
            window_size: 15
            send_every: 15
    humidity:
      name: "Office Emory Humidity"
      accuracy_decimals: 0
      filters:
        - sliding_window_moving_average:
            window_size: 15
            send_every: 15
```
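Conceptually the filter is just an average over the most recent samples. A quick sketch of the idea, in Swift purely as illustration rather than ESPHome’s own implementation:

```swift
// What a sliding window moving average does, conceptually: each output
// is the mean of the `window` most recent samples, smoothing out jitter.
func slidingAverage(_ samples: [Double], window: Int) -> [Double] {
    guard window > 0, samples.count >= window else { return [] }
    return (0...(samples.count - window)).map { i in
        samples[i..<(i + window)].reduce(0, +) / Double(window)
    }
}
```

With window_size and send_every both set to 15, the sensor reports one averaged value per fifteen raw readings.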
I also have a handful of spares, so if you’re interested I can send either a DIY kit or a fully assembled version.
I’ve been having a little bit of “plotter’s block” recently, but I finally started making some new plots today. While doing that I remembered a series of plots I’d forgotten to add to the gallery. These plots are built around the addition of geometric boolean operations to SwiftGraphics.
There are also some fun time lapses of some of the plots.
Last week I was listening to the latest episode of Make Do and near the end the conversation turned to generic art from a department store.
The mass-produced prints from IKEA, Target, etc. remind me of bringing store-bought baked goods to a party versus making a cake. Both fulfill the function of being a dessert, but they aren’t equal. There’s an intentionality to baking a cake or finding a unique print for your wall that the store-bought version doesn’t have.
When you bake a cake or hang “real” art you’re asking other people to ask you about it. Not just in the sense of fishing for compliments (though who doesn’t want people to say you have great taste?), but about the journey you went through for it to be here, now. How you baked the cake, or finally bought a piece by an artist you admired, or how you stumbled upon the piece at a local farmers’ market.
A generic photo of pebbles or the Golden Gate Bridge is there to be, effectively, a placeholder in your room. It fills space in an aesthetically pleasing way, but doesn’t prompt a conversation. Someone might say they like it, but the only conversation to be had is “thanks, I picked it up at Safeway on my way here.”
Recently I bought a bunch of new Sonoff IoT devices; among them was the Sonoff L1, a WiFi-connected LED strip. Like all of the other Sonoff equipment, I figured it would be supported by ESPHome for easy integration with Home Assistant. As it happens, this wasn’t the case. The whole project ended up being far more complicated than I expected.
Flashing the Controller
The L1 doesn’t expose any headers, so you’ll need to solder wires to the UART pads. Luckily this only requires a few minutes of soldering and you’re good to go.
Flashing the ESP8285 proves a little more difficult, as there’s no exposed GPIO0 pin. The Tasmota guide has a good illustration showing which pin on the chip itself needs to be shorted to ground. While this seems like it might be tricky, you get the hang of it quickly.
Once the ESP is booted into programming mode it’s an easy matter to flash ESPHome which provides OTA updates for future changes.
I’d initially followed some guides for a similar LED controller which used PWM to control the RGB channels; however, the L1 uses a Nuvoton N76E003 to drive the LEDs, and the ESP communicates with the Nuvoton over serial. Some clever people over on the Home Assistant forums managed to figure out how the board works, everything from the serial protocol to the commands.
After several rounds of “I don’t know enough to make this work” I eventually started looking at the Tasmota implementation along with the ESPHome custom light docs and began coding up a custom component. Eventually I managed a basic, but working, implementation which parsed Home Assistant’s input and passed it along to the LED strip.
I would like to figure out how to support some of the other modes in the UI. The music sync mode has parameters for both sensitivity and speed. There are also several “DIY” modes which allow the user to specify colors for the controller to cycle through.
Putting It All Together
I’ve uploaded the custom component and an example config on GitHub. All you need to do is download sonoff_l1.h into your ESPHome directory, update led_strip_1.yaml with the specifics of your installation, and you should be off to the races.
There are some limitations at the time of writing. While Home Assistant can send changes to the L1, the remote won’t push changes back to Home Assistant. Additionally, not all of the L1’s modes are fully supported yet.
It’s been a couple of months since I initially began working on SwiftGraphics and it’s already grown quite a bit, and most of that growth has come along with the requisite growing pains. With very few exceptions, every time I’ve started a new generative piece I’ve found a part of the code-base that was fundamentally broken in some way. The good news is that this forced me to actually write tests for more of the code.
One of the largest changes I made was to the ray tracing system. The original version required each ray to have a termination point on a bounding box, along with a separate RayTracer protocol which allowed an object to modify a ray. The updated version unified the two protocols, allowing any Shape that can calculate intersections to handle¹ rays.
With the new ray tracer in place the first real feature to come from the update was recursive ray tracing. Simply put, this meant that a ray could interact with objects multiple times; for instance, two mirrors reflecting rays back and forth.
Another piece of low-hanging fruit I’ve been after for a while was Perlin noise. Much like everything else, there were a few false starts, but eventually I implemented the p5.js version of Perlin noise. Naturally the thing to do was to use the generator to create plots of Perlin noise as a linear sequence.
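For the curious, 1-D gradient noise is compact enough to sketch. This follows the general shape of the p5.js approach but is only an illustration, not the SwiftGraphics implementation:

```swift
// Minimal 1-D gradient noise: random gradients at integer lattice points,
// blended with Perlin's fade curve. Illustrative, not the library's code.
struct Noise1D {
    private let gradients: [Double]

    init(gradientCount: Int = 256) {
        gradients = (0..<gradientCount).map { _ in Double.random(in: -1...1) }
    }

    // Perlin's smoothstep: 6t^5 - 15t^4 + 10t^3.
    private func fade(_ t: Double) -> Double {
        t * t * t * (t * (t * 6 - 15) + 10)
    }

    /// Noise value for non-negative x; zero at every integer lattice point.
    func value(at x: Double) -> Double {
        let i = Int(x)
        let f = x - Double(i)
        let g0 = gradients[i % gradients.count]
        let g1 = gradients[(i + 1) % gradients.count]
        let u = fade(f)
        // Blend the two gradient contributions (gradient times distance).
        return (1 - u) * g0 * f + u * g1 * (f - 1)
    }
}

// Sampling the generator as a linear sequence, ready to plot as a line.
let noise = Noise1D()
let line = stride(from: 0.0, to: 10.0, by: 0.05).map { x in
    (x: x * 50, y: 200 + noise.value(at: x) * 100)
}
```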
Finally, as was linked to at the top of the post, I’ve open-sourced SwiftGraphics. This comes with all of the changes mentioned above, along with actual documentation and an example app. The library is mostly built for my needs, but I’d love to get feedback to make it more useful for everyone.
¹ In most cases this simply means the primitive shape returns an empty array, which has the effect of casting a shadow.
Last week I released the first update to Capturebot, version 2020.2, with a slew of new features and bug fixes. The update is mostly centered around making working with mappings easier.
The biggest change is a shift away from a single window to a document model, which means mappings can now be saved and shared. For a digital tech this is a perfect way to keep different mappings for each client. On the studio side mappings can be easily distributed to workstations for freelance techs on location. Mappings can also be locked to prevent editing.
There are a number of updates related to importing images. Folders of images can now be opened (along with their subfolders) and files can be dropped onto the images view. There’s also support for opening audio and video content; anything that Exiftool can write to is supported.
Another major feature is a new, free, write-only mode. After the trial expires Capturebot will still be able to open mappings and write metadata; however, none of the mappings can be changed.
The final big update is the addition of a small shotlist view. This brings Capturebot full-circle back to its origins as a shotlist tracking application. The plan is to enhance the shotlist over time and make it a larger part of the application.
I didn’t want to simply port Processing into Swift, someone’s already done that. Instead I wanted to make a Swifty-er version that was more protocol and object oriented. Today I got it to a state where it’s functional and I’m happy with how it’s implemented.
I tried several methods to figure out the angles. Initially I wanted to use vectors, but bailed (because I hadn’t implemented all of the vector math required for the operations) and decided I should be able to use the Pythagorean theorem to make a triangle from the line and calculate the angles. This initially worked, but ended up with far too many edge cases: everything from which way the line sloped to horizontal and vertical lines.
Eventually I went back to the vector math. I knew that it was something that was possible, easy even, because of an example demonstrating non-orthogonal reflection from p5.js. After adding some more vector math operations I was able to reliably determine the angle perpendicular to the Fresnel lens line. This also gave the lens a single direction: it acts as a Fresnel lens from one side and a collimating mirror from the other.
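The operation at the heart of that reflection is a single identity: reflecting a direction d about a unit normal n gives r = d - 2(d·n)n. A minimal sketch with illustrative names, not the SwiftGraphics API:

```swift
struct Vec {
    var x, y: Double
}

func dot(_ a: Vec, _ b: Vec) -> Double {
    a.x * b.x + a.y * b.y
}

// Reflect an incoming direction about a surface normal: r = d - 2(d·n)n.
// The normal must be unit length for the identity to hold.
func reflect(_ d: Vec, about n: Vec) -> Vec {
    let k = 2 * dot(d, n)
    return Vec(x: d.x - k * n.x, y: d.y - k * n.y)
}
```

A ray travelling down-right that hits a horizontal surface (normal pointing straight up) comes back up-right, which is exactly the mirror behavior needed on the reflective side of the lens.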
I follow a number of people on Twitter who make generative art, and it’s something I’ve been wanting to try myself for a while. With everything going on I finally made the time to sit down and really play with a tool called Processing.
One of my favorite generative artists, Anders Hoff of Inconvergent, does some amazing work with plotters. My favorite piece of his is 706dbb4. I love the way the intersecting lines form a sphere, and the lines that break out of the confines of the circle are almost solar flares.
I wanted to attempt to generate a piece inspired by this print, and wanted to create something that could be drawn on a pen plotter.
I started off by connecting random vertices on the edge of a circle to one another, creating a random web of lines. Eventually I added a limit on the angles at which the vertices could be placed, which adds a more abstract character. After adding randomization to all of the parameters, I just kept rendering new images.
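The core of that process fits in a few lines: pick pairs of angles, map them to the circle’s edge, and connect them. A sketch in Swift, with illustrative constants and function names:

```swift
import Foundation

// Point on a circle of the given radius at angle theta (radians).
func pointOnCircle(_ cx: Double, _ cy: Double,
                   _ radius: Double, _ theta: Double) -> (x: Double, y: Double) {
    (x: cx + radius * cos(theta), y: cy + radius * sin(theta))
}

// A random web of chords. Limiting arcLimit to less than a full circle
// restricts where vertices can land, giving the more abstract character.
let center = 250.0, radius = 200.0
let arcLimit = 1.5 * Double.pi
let chords = (0..<150).map { _ in
    (pointOnCircle(center, center, radius, .random(in: 0...arcLimit)),
     pointOnCircle(center, center, radius, .random(in: 0...arcLimit)))
}
```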
Andrew Rodgers on Twitter suggested playing with more advanced math functions like sine, cosine, exponents, etc. The advice came right as I was playing with using Bezier paths instead of straight lines. I created another set of vertices and modified their positions:
```
anchorPoint = pointOnCircle(center, center, circleRadius + 100, tan(r))
anchor1X = random(0, canvasSize)
anchor1Y = anchorPoint
anchor2X = anchorPoint
anchor2Y = pow(log(anchorPoint), 3)
```
The blue and green squares are the anchors that Bezier paths move through, each with a start and end point on the circle:
The new images are much more energetic, and when the angles are small produce an internal flare of lines, as seen in the third image below.
All of the images (and possibly more, over time) are available over in the new Generative Art gallery. I hope this will be a fun new area to explore over time.