Jump to:   2013   2014   2015   2016   2017   2018
Throughout my years at high school, at art college and as a NEET, I picked up a wide range of skills in video production, sound production and programming. These have come in very handy for all sorts of things on the channel.

2013

There are rain filters available for the editing program I use, Adobe After Effects, but they cost money, so I decided to write a program of my own to generate rain footage. This took a few days to get right. Multiple "sheets" of rain, each at a different distance and with its raindrops falling at a different speed, are superimposed over a transparent backdrop, so that the mix - a PNG sequence - can be laid over footage without any (quality-degrading) chroma-keying. The result is pretty good.
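The gist of the program is easy to sketch (a simplified JavaScript illustration, not my original code; all the parameters here are invented):

    // Each "sheet" is a set of drops at one notional distance: farther sheets
    // fall slower, are shorter and fainter, so the layers read as depth.
    const W = 1280, H = 720;
    const canvas = document.createElement('canvas');
    canvas.width = W; canvas.height = H;
    const ctx = canvas.getContext('2d');

    const sheets = [0.3, 0.6, 1.0].map(depth => ({
      speed: 18 * depth,            // px per frame: nearer = faster
      len:   22 * depth,            // nearer = longer streaks
      alpha: 0.5 * depth,           // nearer = more opaque
      drops: Array.from({ length: 120 }, () => ({
        x: Math.random() * W, y: Math.random() * H
      }))
    }));

    function renderFrame() {
      ctx.clearRect(0, 0, W, H);    // transparent backdrop, so no keying needed
      for (const s of sheets) {
        ctx.strokeStyle = `rgba(220,225,235,${s.alpha})`;
        for (const d of s.drops) {
          ctx.beginPath();
          ctx.moveTo(d.x, d.y);
          ctx.lineTo(d.x, d.y + s.len);
          ctx.stroke();
          d.y = (d.y + s.speed) % H;  // wrap drops back to the top
        }
      }
      // canvas.toDataURL('image/png') gives one frame of the PNG sequence
    }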

Now the lightning. After Effects has a built-in lightning effect, but I wanted as much of the channel to be my own work as possible, so again I wrote a custom program. This wasn't as successful as the rain; I should have spent longer on it. However, the result has a simple, endearing quality, as in a certain 1980s cartoon, so it's not all bad.

The thunder sounds were the most fun to make. Of course, real thunder audio can be easily obtained on the Internet, but that didn't seem right. Again, I wanted it to be my own work. I remembered reading somewhere that a good way to fake thunder is to record the sound of paper being crumpled and then greatly lower the pitch. After some experimentation, I found that -35 semitones gave a good result. This is how most of the thunder heard on the channel was created. A small amount was also created by rolling a wheelie-bin around and, again, down-pitching it.
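The arithmetic behind that figure: each semitone is a factor of 2^(1/12), so -35 semitones multiplies the frequency by roughly 0.13. For illustration only (my actual processing was done in Sound Forge), the same drop could be done in a browser like this:

    // A -35 semitone drop via playback rate, so pitch and speed fall together
    // (which suits thunder). 'crumple.wav' is a stand-in for the recording.
    const audioCtx = new AudioContext();
    const ratio = Math.pow(2, -35 / 12);          // ≈ 0.13

    fetch('crumple.wav')
      .then(r => r.arrayBuffer())
      .then(bytes => audioCtx.decodeAudioData(bytes))
      .then(buffer => {
        const src = audioCtx.createBufferSource();
        src.buffer = buffer;
        src.playbackRate.value = ratio;           // crumpled paper becomes rolling thunder
        src.connect(audioCtx.destination);
        src.start();
      });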

Lightning sounds were made by a variety of means. I set up a few VST synths in Fruity Loops to emit bursts of noise, then manipulated the result in Sound Forge. I also used a range of sources, including early '90s hip-hop songs, as random audio "stuff" that was massively distorted, speeded up, re-pitched, chorused, etc.

The sound of rainfall was achieved by simply creating fuzz (see above) and removing everything but the highest frequencies.

Lastly, wind sounds. Ideally these would have been made with a hardware synthesiser, me twiddling the filter cut-off over a white noise generator. However, neither of my hardware synths (the Ensoniq Fizmo and the Elektron SidStation) could generate pure white noise, so I had to use software synths (such as Krakli's Gargoyle VST) to generate noise, which was then run through filters modulated by automated LFOs. I made the sounds more erratic by manually pitch-bending them in Sound Forge.
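For illustration, the patch looks roughly like this in Web Audio terms (a sketch, not the wind code running elsewhere on this page):

    // White noise through a resonant low-pass filter whose cut-off is swept
    // by an LFO - the classic wind patch.
    const ctx = new AudioContext();

    // 2 seconds of white noise, looped
    const noiseBuf = ctx.createBuffer(1, ctx.sampleRate * 2, ctx.sampleRate);
    const data = noiseBuf.getChannelData(0);
    for (let i = 0; i < data.length; i++) data[i] = Math.random() * 2 - 1;
    const noise = ctx.createBufferSource();
    noise.buffer = noiseBuf;
    noise.loop = true;

    const filter = ctx.createBiquadFilter();
    filter.type = 'lowpass';
    filter.frequency.value = 400;   // base cut-off
    filter.Q.value = 5;             // resonance gives the "howl"

    // Slow LFO wobbling the cut-off between roughly 100 Hz and 700 Hz
    const lfo = ctx.createOscillator();
    lfo.frequency.value = 0.2;
    const lfoGain = ctx.createGain();
    lfoGain.gain.value = 300;
    lfo.connect(lfoGain).connect(filter.frequency);

    noise.connect(filter).connect(ctx.destination);
    noise.start(); lfo.start();
    // (The rainfall trick above is the same noise source with filter.type = 'highpass'.)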

The opening sequence (or "intro") originally started with an animation of the 9/11 attacks; the bolt of lightning which now opens each video was originally a "response" to the plane colliding with the North Tower. I removed this prelude because it felt disrespectful to use the atrocity in this way. The rest of the sequence stayed as planned: the channel title "wipes" over the screen like the titles in a 1930s horror film, then another thunderbolt heralds the episode title. Fade into the video, and the sound of rolling thunder gradually dies down over the next thirty seconds.

The closing sequence (or "outro") was intended as a gentle way to let the viewer detach from whatever had been said during the video, and perhaps contemplate it on their own. For this reason it is much longer than necessary - 25 seconds. There is also a 10-second lead-in, with wind sounds slowly rising, signalling to the viewer that the video is about to finish. During the closing sequence, several contact details are shown.

The original intro was used until 2017, and the original outro until 2018. They were modified here and there along the way, but I will not detail these changes as they are so trivial that I can't remember them any more.

2014

The MW font is stored as transparent PNGs (one for each alphanumeric character), each specially styled with multiple layers, each layer with its own glow settings and so on. Piecing these PNGs together to form any particular title quickly became very tedious, so after several months I wrote a program to do it automatically.
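The core of that program is easy to sketch (a simplified JavaScript illustration; the real one also copes with spaces, punctuation and kerning):

    // Lay the per-character PNGs side by side on a transparent canvas.
    // Assumes one styled PNG per character, e.g. glyphs/A.png, glyphs/B.png ...
    async function buildTitle(text) {
      const glyphs = await Promise.all([...text].map(ch => new Promise((ok, err) => {
        const img = new Image();
        img.onload = () => ok(img);
        img.onerror = err;
        img.src = `glyphs/${ch.toUpperCase()}.png`;
      })));

      const canvas = document.createElement('canvas');
      canvas.width = glyphs.reduce((w, g) => w + g.width, 0);
      canvas.height = Math.max(...glyphs.map(g => g.height));
      const ctx = canvas.getContext('2d');

      let x = 0;
      for (const g of glyphs) {
        ctx.drawImage(g, x, 0);
        x += g.width;
      }
      return canvas;               // export as a transparent PNG via toDataURL()
    }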

In June 2014, with the centenary of WW1 looming, I decided to make a series about it: Flanders Readings. This was an opportunity to use my Fizmo on the channel for the first time. BASIC was used to create several visual effects for the series, including the moving blotches of colour in the intro. It was also the first time that I wrote a program to automatically generate an animation for each episode in a series - in this case the title card for each episode, rendered in an "old typewriter" font and typed out on-screen, complete with sound effect. The syncing of the sound effect was badly done, because I had not yet thought of employing Chrome for these sorts of things (see 2017). Flanders Readings was quite a technical enterprise, with several episodes having little programs written especially to create custom animations for them. The episode How the Lights Went Out used at least five such programs.

In August, the channel reached 1,000 subscribers. I made a celebratory video. In this video, a vocoder appeared on the channel for the first time - a very brief vocalisation put through the Fizmo's on-board vocoder.

In October, I began a new series: "reply" videos. This required custom intro and outro sequences, and in these I introduced the 1920s German Expressionist aesthetic that has since become a mainstay of the channel. It is achieved by completely desaturating the image, increasing the contrast, overlaying solid blue with the "Hue" blend mode, and then overlaying solid white with the "Overlay" blend mode. Lumpy film grain is then overlaid on top of that. (This grain was obtained by pointing the camera into darkness and increasing its ISO to some ridiculous level that it couldn't handle. The resulting footage is basically pure grain, which can be overlaid onto anything.) Later I began using a similar treatment on the main footage of the reply videos to give them a unique aesthetic: the footage itself is desaturated, its contrast increased, and then a solid of RGB(0,108,255) overlaid using the "Soft Light" blend mode.
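For the technically minded, the recipe can be restated as a canvas sketch (illustrative only - I actually do it with adjustment layers in After Effects, and the contrast figure here is invented):

    // 'frame' is an image, video frame or canvas.
    function expressionist(frame) {
      const c = document.createElement('canvas');
      c.width = frame.width; c.height = frame.height;
      const ctx = c.getContext('2d');

      // 1. Desaturate and push the contrast (150% is a guess)
      ctx.filter = 'grayscale(100%) contrast(150%)';
      ctx.drawImage(frame, 0, 0);
      ctx.filter = 'none';

      // 2. Solid blue through the "Hue" blend mode
      ctx.globalCompositeOperation = 'hue';
      ctx.fillStyle = 'blue';
      ctx.fillRect(0, 0, c.width, c.height);

      // 3. Solid white through the "Overlay" blend mode
      ctx.globalCompositeOperation = 'overlay';
      ctx.fillStyle = 'white';
      ctx.fillRect(0, 0, c.width, c.height);

      // 4. Film grain would be drawn on top here. For the reply-footage
      //    variant, swap step 2/3 for rgb(0,108,255) with 'soft-light'.
      ctx.globalCompositeOperation = 'source-over';
      return c;
    }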

The original recording set-up (the FZ200 camera on its own!) was maintained for 10 months until I got a microphone at the end of October: the Blue Yeti. This hugely improved the audio in my videos.

In November, a BASIC program was written to generate an animation of blood shards for the video neoreactionary testosterone (Nov 20).

Also in November, I began another new series: Skype recordings. This again required custom intro and outro sequences, done in the same style as the reply videos. But the Skype recordings are interesting for a different reason. Since I expected to release a lot of them, I decided to speed up the production process by automating it to a great extent. I wrote a fairly elaborate software processor which takes the finessed audio recording and builds a video around it, complete with the pre-filmed intro, a custom title card, a looped animation throughout the conversation, and a pre-filmed outro. It also automatically creates the video thumbnail.
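To give a flavour of the assembly stage, here is a rough Node.js/ffmpeg sketch (illustrative only; the real processor is far more elaborate, and the file names are invented):

    const { execSync } = require('child_process');
    const fs = require('fs');

    function buildSkypeVideo(audio, out) {
      // Loop the conversation animation for the length of the recording
      // (-shortest), muxing in the finessed audio.
      execSync(`ffmpeg -y -stream_loop -1 -i loop.mp4 -i ${audio} ` +
               `-map 0:v -map 1:a -shortest body.mp4`);

      // Then butt the pre-filmed intro and outro onto either end
      // (assumes all three parts share identical encoding settings).
      fs.writeFileSync('list.txt',
        ["file 'intro.mp4'", "file 'body.mp4'", "file 'outro.mp4'"].join('\n'));
      execSync(`ffmpeg -y -f concat -safe 0 -i list.txt -c copy ${out}`);
    }

    buildSkypeVideo('conversation.wav', 'episode.mp4');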

Finally, in December, I bought some LittleBits modules. These are essentially synth blocks that can be joined together in various ways to create any number of different custom configurations to generate sound. One module is a white noise generator and another is a resonant low-pass filter. These two, along with power and a speaker module, are enough to create wind sound effects, which was my main desire. However, I also invested in the Arduino module (which can be programmed) and used it to write a custom random LFO to modulate the filter independently.
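The logic of that random LFO is simple enough to sketch (shown in JavaScript purely for illustration; the real thing runs on the Arduino and writes to an output pin):

    // Sample-and-hold with glide: pick a new random target now and then and
    // drift toward it, so the cut-off wanders instead of sweeping periodically.
    let current = 0.5, target = Math.random();

    function randomLfoStep(glide = 0.02, holdChance = 0.01) {
      if (Math.random() < holdChance) target = Math.random(); // new destination
      current += (target - current) * glide;                  // glide toward it
      return current;                                         // 0..1 control value
    }

    setInterval(() => {
      const cutoff = 100 + randomLfoStep() * 900;  // map to ~100-1000 Hz
      console.log(cutoff.toFixed(0));              // send to the filter here
    }, 20);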

2015

Comparatively few new special effects were created in 2015, at least until August. At that point I launched The Rot, my study of the Rotherham scandal. For this I wanted an aesthetic very different from that of the main channel. I spent a great deal of time shaping it, eventually deciding that it should be cold, gritty and unsettling, but also haunting and soulful. The Fizmo could certainly have some input there, but my main tool would be my other hardware synth, the SidStation. It is built around the Commodore 64's SID sound chip, so has a very distinctive sound. I also used a lot of sounds that had been created with various software synths over the years, as far back as 2002, and stored on hard drives waiting to be used one day. As for the visuals, among other things I used strange video static created many years before by my old Digital8 camcorder (by forcing it to read an analogue tape in digital mode, or vice versa). For the teasers for this series, I also used old footage recorded in 2000 on the same camcorder. A special program was written to generate a "bleeding" animation for the trailer. Another program was written to generate the flow of episode titles that appeared in the "reveal" video, which is one of the videos I am most proud of in the channel's entire life.

In September, I decided to begin a new series of deliberately introspective monologues, called the Monday Moan. Like the Skype recordings, these would be audio recordings processed into videos automatically, by a new piece of software derived from the Skype-recordings processor.

In November, I began a new format for long videos: the multi-parter. This required the creation of "book-end" sequences for the start and end of intermediate segments. For these I created a new, incredibly "whooshy" wind sound effect using a second-hand hardware synth I had just bought on eBay, a Korg EX800. (Unfortunately this synth, being 32 years old, broke only a fortnight after arriving!)

Later in November, my channel reached 5,000 subscribers. I felt it was appropriate to make a celebratory video. For this, software was written to extract subscriber data from YouTube and generate an animation sequence to precisely reflect the growth trend visually.
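The core of it is just interpolation: turn dated subscriber counts into one value per rendered frame, so the animation's pace follows the real growth curve. A sketch, with invented sample data:

    const samples = [                       // hypothetical data points
      { day: 0,   subs: 0 },
      { day: 300, subs: 1000 },
      { day: 660, subs: 5000 },
    ];
    const DAYS = 660, FPS = 25, SECONDS = 20;
    const frames = [];

    for (let f = 0; f < FPS * SECONDS; f++) {
      const day = (f / (FPS * SECONDS - 1)) * DAYS;
      let i = 1;
      while (samples[i].day < day) i++;     // find the surrounding samples
      const a = samples[i - 1], b = samples[i];
      const t = (day - a.day) / (b.day - a.day);
      frames.push(Math.round(a.subs + t * (b.subs - a.subs)));
    }
    // frames[] now drives the counter/graph drawn for each rendered frame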

2016

At the start of January, for some reason I decided to make a video celebrating two years of the channel's life. I wanted it to document the channel's development, in terms of how many videos had been made, how many series begun, etc. Each of these required a bespoke program to generate a chronologically-precise "ticker" animation. All of these animations were combined in After Effects, as usual. Ambient sounds were created using LittleBits and the Korg Kaoss Pad Quad. A cheerful tune, recorded many years before on the Fizmo, was used as the main music.

In February I began planning a new aesthetic for the channel. It was going to be very ambitious, involving a model set, slow-motion filming, etc. Eventually I realised I simply had too much other, far more pressing work to do, and that these new title sequences (which would have created a unified look across all the strands of the channel) were an indulgence. However, while planning it, I bought a new camera, the Sony RX100. I chose it because it could do slow-motion filming, but soon realised that its image quality was simply incredible, so decided to use it for all filming (from June onwards).

In April the channel reached 10,000 subscribers, and it was time for another celebratory video. This one was more technically elaborate than the 5,000 video, featuring a flickering line that crossed the screen, showing the subscriber growth in a particularly beautiful way.

In August, there was a sudden need to make a video introducing the general public to the Alt-Right. This video was made in a 24-hour stretch and involved FL Studio, Sound Forge, Photoshop, After Effects, and three custom-written BASIC programs.

In September, I created a new intro for the regular videos, this time involving some basic 3D animation (done with the antiquated Daz Bryce) as well as my usual tools. However, for some reason I didn't start using this new intro until five months later, in February 2017.

In October I prepared for my trip to America - the Dangerous Haggis Tour. Since this was my first real visit to America, I was determined to make as many videos as possible, documenting my experiences. Obviously I was going to need a portable device on which to create videos, but I didn't have a laptop. I should probably have taken the leap and spent enough for a decent laptop, but, afraid of splurging cash, I instead chose a cheap little tablet. It was very portable and very cute, but would be unable to run Adobe After Effects. I decided to "go lo-fi" and make videos in a semi-automated way. A program was written to combine all the footage files in a specified folder, generate a custom title sequence, and combine these elements into a finished video. This software took a few weeks to write (and was improved in little ways throughout my time in America) but worked very effectively with minimal computing power.

For the Dangerous Haggis Tour, I also invested in a new microphone: the Zoom H2N. Though I intended it only for outdoor recording, I eventually ended up using it for everything.

2017

In January, my life was severely disrupted when I was doxed and had to flee Scotland. I couldn't take my large computer with me, so was now forced to buy a decent laptop that could handle heavy tasks and not drive me nuts by taking ages to do things. Thus, I spent £1600 on an Asus Zenbook Pro.

Once settled abroad, and still feeling quite depressed and listless, I treated myself to a new hardware synth: the Studiologic Sledge. It is a strange beast, digitally emulating analogue yet never sounding anything other than 1980s digital. It works along analogue lines, so is very easy to work with, but always brings that fun 1980s texture to everything. It is much more straightforward than the Fizmo, but, I confess, less interesting - probably because it is so straightforward.

In May I was planning a trip to London. I expected to record monologues on the Tube. These would, of course, be processed into videos for the channel, so I wrote software to map the Tube journey for each one. This would be extremely difficult to do by coding the geometry manually, but I had the bright idea of employing Chrome (the web browser) to do this task for me. Being an HTML5 browser, it can perform all sorts of very complex calculations involving SVG paths, which was exactly what I needed. I found a way to communicate between custom software and Chrome, and soon had a complete system set up for generating unique animations illustrating a Tube journey (even one involving multiple Tube lines), all timed to match the length of the monologue. It was a great achievement for me in terms of programming, being a blend of very complex coding in both JavaScript and BASIC. However, once in London and taking rides on the Tube, I realised that the ambient noise made it impossible to record monologues. So, this software would never be used!
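The heart of the trick is that the browser does the path geometry for you. A sketch (the element names are invented):

    // An SVG <path> element standing in for whatever line the journey follows.
    const path = document.getElementById('northern-line');
    const total = path.getTotalLength();
    const monologueSecs = 272;                    // length of the recording
    const FPS = 25;

    function trainPosition(frame) {
      const t = frame / (monologueSecs * FPS);    // 0..1 through the journey
      return path.getPointAtLength(t * total);    // {x, y} along the line
    }

    // e.g. move a marker once per frame:
    let frame = 0;
    const marker = document.getElementById('train-dot');
    setInterval(() => {
      const p = trainPosition(frame++);
      marker.setAttribute('cx', p.x);             // assuming a <circle> marker
      marker.setAttribute('cy', p.y);
    }, 1000 / FPS);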

However, later on, the framework for employing Chrome was separated out from the Tube code, refined and made generic; it is now used to do all sorts of things with Chrome (see below). So this project was not a complete waste of time after all.

For some time, I had wanted to improve the code for generating lightning bolts. Each frame was being drawn randomly, and was therefore completely different from the one before it. Thus, there was no "flow", as there should be with electricity. I took some time to write new code which works in an entirely different way. It involves "pulses" that travel along a straight line, at different speeds and with different impact levels. In each frame, at every point along the line, the cumulative impact of all present pulses dictates the divergence from the centre. This results in a beautiful evolving lightning bolt which can be rendered at any speed and for any duration.
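A sketch of the idea (all parameters invented):

    // Each pulse travels along the bolt's axis; at every sample point, the
    // summed influence of nearby pulses sets the divergence from the line.
    const pulses = Array.from({ length: 8 }, () => ({
      pos:    Math.random(),                 // 0..1 along the line
      speed:  0.002 + Math.random() * 0.01,
      impact: (Math.random() * 2 - 1) * 40,  // px of divergence, either side
      width:  0.05 + Math.random() * 0.1     // how much of the line it influences
    }));

    function boltPoints(samples = 200) {
      const pts = [];
      for (let i = 0; i < samples; i++) {
        const u = i / (samples - 1);
        let offset = 0;
        for (const p of pulses) {
          const d = (u - p.pos) / p.width;
          offset += p.impact * Math.exp(-d * d);  // bell-shaped influence
        }
        offset *= 4 * u * (1 - u);                // pin both ends of the bolt
        pts.push({ x: u, y: offset });            // x in 0..1, y in px
      }
      return pts;
    }

    function step() {                             // advance the pulses each frame
      for (const p of pulses) {
        p.pos += p.speed;
        if (p.pos > 1.2) p.pos = -0.2;            // recycle off the end
      }
    }
    // Each frame: step(), then draw boltPoints() as a polyline. Because the
    // pulses move smoothly, consecutive frames flow instead of being re-rolled.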

Having learned how to employ Chrome to do slave work, I realised that, since it has the Web Audio API, I could use it to generate and manipulate sound. As part of learning how to use the Web Audio API, I wrote a JavaScript program that generates wind ambience (this code is used on this very page). A "system" was created for manipulating audio over time, to order, and outputting it as a file on the hard drive. The BASIC->Chrome framework was then used to control this system from inside a custom BASIC program.
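The rendering side leans on OfflineAudioContext, which renders a node graph faster than real time into a buffer that can then be written out as a WAV file. A sketch (the graph is a stand-in, and the WAV-encoding step is omitted):

    const SR = 44100, SECONDS = 30;
    const off = new OfflineAudioContext(2, SR * SECONDS, SR);

    // ...build the noise/filter/LFO graph on `off` exactly as for live playback...
    const osc = off.createOscillator();          // stand-in for the real graph
    osc.connect(off.destination);
    osc.start();

    // Parameter changes are scheduled "to order", e.g. a gust at 10 seconds:
    // someParam.linearRampToValueAtTime(value, 10);

    off.startRendering().then(buffer => {
      // buffer is an AudioBuffer: interleave buffer.getChannelData(0/1),
      // prepend a 44-byte WAV header, and hand the bytes over for saving.
      console.log('rendered', buffer.duration, 'seconds');
    });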

Since then, I have written many other programs that use that Chrome framework to perform specific tasks: panning a sound, manipulating a graphic, synthesising audio, doing geometry calculations, and so on.

Ever since my first speech in September 2016, I had wanted a special intro for the speech videos. I came up with the idea of showing the country (where the speech took place) being "drawn" by a lightning bolt. This would be simple to code if I had the outline of the country. So I wrote code for discerning the outline of a solid shape, as a long sequence of coordinates. This was very tedious to code, and to be honest the program still doesn't work perfectly and sometimes needs manual "help" - but it works. I combined this with the evolving lightning code and the Chrome sound-manipulation code to produce what is now seen at the start of every speech video, from Oslo onwards. One amusing element in this sequence is the crackling, buzzing sound that mutates with the lightning bolt; this originated as a Lloyd Cole song, which I heavily distorted. It is then panned sequentially by Chrome to match the screen position of the lightning bolt.
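My own method aside, a standard way to get an ordered outline is Moore-neighbour tracing; the sketch below shows that technique (my program works along broadly similar lines, but evidently less reliably):

    // mask[y*w+x] is 1 inside the shape, 0 outside.
    const DIRS = [[-1,0],[-1,-1],[0,-1],[1,-1],[1,0],[1,1],[0,1],[-1,1]]; // clockwise from W

    function traceOutline(mask, w, h) {
      const filled = (x, y) => x >= 0 && y >= 0 && x < w && y < h && mask[y * w + x];

      let sx = -1, sy = -1;                 // first filled pixel, row-major scan
      for (let i = 0; i < w * h && sx < 0; i++)
        if (mask[i]) { sx = i % w; sy = (i / w) | 0; }
      if (sx < 0) return [];

      const path = [[sx, sy]];
      let cx = sx, cy = sy, dir = 0;        // pixel to the west is known empty
      for (let guard = 0; guard < 4 * w * h; guard++) {
        let moved = false;
        for (let i = 0; i < 8; i++) {       // sweep clockwise from the backtrack
          const d = (dir + i) % 8;
          const nx = cx + DIRS[d][0], ny = cy + DIRS[d][1];
          if (filled(nx, ny)) {
            cx = nx; cy = ny;
            dir = (d + 6) % 8;              // restart the sweep near where we came from
            path.push([cx, cy]);
            moved = true;
            break;
          }
        }
        if (!moved) break;                  // isolated pixel
        if (cx === sx && cy === sy) break;  // back at the start: outline closed
      }
      return path;                          // ordered boundary coordinates
    }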

Something that had always aggravated me about filming videos, from the very start of the channel, was the 30-minute recording limit on cameras. It meant I could never fully relax, always having to be aware that the camera might stop recording. This is especially bad with the RX100 because it doesn't even bleep when it stops, and several times I have kept talking for a few minutes before realising. Even without these mishaps, it seemed a silly limitation to be tolerating given that making videos is my job. First, I tried to solve this problem by buying a camcorder, but unfortunately it just couldn't handle the low-light conditions in which my videos are filmed. So I looked for an external recorder - a device that would record footage from the RX100 and circumvent the time limit. I found the Hauppauge Rocket. At first I was delighted with it, but soon realised that it brought technical problems of its own which complicated my work. Ultimately, I only used it when I could be bothered solving the problems that came with it, which was very rarely. I know that this video, at least, was recorded with it.

In July I treated myself to a proper vocoder - the Electro-Harmonix v256. I haven't really used it yet for anything, but doubtless it will come in handy in future.

In August, I wrote a little program to automate the final stage of making multi-part videos. Each part is rendered out as an mp4, and this program stitches them together. It also counts up the durations and outputs a list of where each part will begin in the finished video - a list I then paste into the video's description on YouTube.
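A sketch of the idea (Node.js for illustration - not the actual program - with invented file names):

    const { execSync } = require('child_process');
    const fs = require('fs');

    const parts = ['part1.mp4', 'part2.mp4', 'part3.mp4'];
    let t = 0;
    for (const [i, p] of parts.entries()) {
      const m = Math.floor(t / 60), s = Math.round(t % 60);
      console.log(`${m}:${String(s).padStart(2, '0')} - Part ${i + 1}`);
      // ffprobe reports each part's duration, which accumulates into the list
      t += parseFloat(execSync(
        `ffprobe -v error -show_entries format=duration -of csv=p=0 ${p}`
      ).toString());
    }

    // Stitch with the concat demuxer (the parts share identical encoding settings)
    fs.writeFileSync('list.txt', parts.map(p => `file '${p}'`).join('\n'));
    execSync('ffmpeg -y -f concat -safe 0 -i list.txt -c copy finished.mp4');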

Towards the end of the year, I wrote a system to simplify the logistics of organising Millenniyule 2017, which involved 66 slots being allocated to guests partly by preference and partly on a first-come, first-served basis. An HTML form was set up on this website. Guest preferences were conveyed by JavaScript to PHP, then downloaded and handled by a BASIC program which did the allocating and automatic emailing. This was my first foray into PHP, and it would turn out to be useful in 2018.
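The browser side of that form was essentially this (a sketch; the endpoint and field names are invented):

    // Gather the guest's preferred slots and hand them to PHP, which stores
    // the submission for the BASIC program to download and process later.
    document.getElementById('slot-form').addEventListener('submit', ev => {
      ev.preventDefault();
      const prefs = {
        name:  document.getElementById('guest-name').value,
        slots: [...document.querySelectorAll('input[name=slot]:checked')]
                 .map(box => box.value)
      };
      fetch('/millenniyule.php', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(prefs)
      }).then(r => alert(r.ok ? 'Preferences received!' : 'Something went wrong.'));
    });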