Monday, 19 September 2016

ESP8266 re-programmer PCB

A little while back we updated some firmware on one of our ESP8266 modules. Well now we've only gone and bought about twenty of them off ebay - and they all need updating!

While it's possible to do this with a usb-to-serial device and pull wires out and plug them in again in a little breadboard, it's also pretty time-consuming. And a bit fiddly.

And when real-life work gets in the way of nerding about, time-consuming and fiddly means more time working and less time nerding. Which means we needed something to make re-programming all these ESP8266 modules that little bit quicker. So we came up with this:

It's a simple re-programming board for the ESP8266 module. Just plug the module into the twin DIP socket, hold down the "program" button and hit reset. The wifi module boots up into "ready-to-program" mode and you can just dump your new firmware to it at the push of a button. No more messing about swapping wires and trying to time everything just right!

Until we got this little thing made, things were getting a little bit messy and out of control at work!

But with this programmer, we managed to re-flash about 20 wifi modules in less than 40 minutes (it takes 90 seconds or more to download the firmware onto the device, so this was pretty good going!) and get a couple of panels of PCBs ready to stick them onto...

Here's the PCB layout should you want to make one yourself.

Just don't forget to put a 1K pull-up resistor on the RESET and GPIO1 lines.
We've drawn the board with both 1206-sized pads (for all you surface mount junkies out there) as well as holes for through-hole resistors too.

The pin layout on the right matches the pin layout for an Arduino Pro mini, which in turn is pin-to-pin for most USB-to-serial modules commonly available on eBay and various other online electronics outlets.

We've found that for programming the wifi module, you don't even need a dedicated power supply - the 100mA from the USB port should be enough. If you do want to power the wifi modules externally (perhaps for testing after flashing the new firmware) simply remove the Vcc wire from your usb-to-serial connector and connect a battery (or other power supply) to the Vcc/Gnd pins.

Saturday, 10 September 2016

OSC UDP TCP/IP and choosing the right technology

Back in the real world, we occasionally have to venture out from behind the nerd bench - where we make stuff for fun - and enter the commercial arena; it's what some of us have to do in order to pay the rent. It's also what some of us do for the sake of "staying relevant": keeping up with different technologies and applying them in real-world applications.

The wealth of different technologies for controlling hardware has exploded in recent years. Maybe it's the IoT (internet of things) "branding" or maybe it's platforms like Arduino and Raspberry Pi that have led to a renewed interest in making things talk to each other - whatever it is, there's never been a better time for making hardware and integrating it with cool technology like smartphones, tablets, handsets and PC computers.

But with all this choice comes responsibility.
A responsibility to choose the most appropriate technology for the job in hand. Sometimes "most appropriate" simply means "one I know how to use". And for many things, that's fine. But something that has become apparent, as the IoT revolution has exploded, is that more and more people are selling their services as industrial/commercial "specialists" - and many haven't the first idea what they are doing!

Over the years we've built layers of abstraction into technology. A bit like cars. They're simpler to use and easier to work on (since everything comes in a single, encapsulated module that can just be swapped out). But with this ease of use comes a cost - when things go wrong, it's not so easy to fix! With cars you take it to the garage; a part may no longer be "fixable" as it might have been 30 years or more ago, so it's just thrown out and a replacement part fitted. That is the cost of simplification.
With technology the same thing seems to be happening. More and more, projects are built from encapsulated modules of code (or libraries if you like) which make everything nice and simple. Until it goes wrong. But, unlike a poorly performing car, it's not always so easy to throw out the "broken" module and replace it with a new one - sometimes it's not immediately obvious which module is broken. Sometimes there may not be a replacement library/module available. What then?

But worse than this, there seems to be a fundamental misunderstanding of which technology to use and when. At best it's a lack of awareness - a gap in knowledge or understanding of how things work "under the hood", meaning that the best-fitting solution is either not known or overlooked. At worst, it's a lack of what we might call "giving a shit".

An example springs to mind - it's difficult to be too specific, since this is a real-life, commercial ongoing project. But the outline is something like this:

We've been asked to build some hardware that a user interacts with. Our hardware connects to a central controller which, when certain combinations of events occur, sends a message to a video player to play a specific sequence. During this time our hardware should remain silent. When the video has finished playing, our controller receives a message from the video player to say it's ready to accept incoming messages again.

That's the gist of things. There's a bit more to it, but those are the basics. We've been asked to send our messages to the video player using the OSC protocol. On the face of things, it seems ok. But when we look a little further into things, questions start to arise about whether or not this would be the most appropriate technology for the job....

Now first up, let's just say that TouchOSC is a great product.

It works across multiple platforms, iOS/Android etc. The interfaces look great and it has an inbuilt editor that lets you put sliders, dials, buttons and the like together really easily. It's dead easy to use and the end result looks pretty good (a bit same-y; once you've seen the TouchOSC app all interfaces tend to look the same, but it's very good for what it does).

But in our case, is it really the most suitable solution?
Firstly - and this is our biggest bugbear - TouchOSC uses UDP to send messages. It's a fire-and-forget protocol. You press a button, a message gets fired, and you just hope that it gets there.

UDP has its uses. It's relatively fast (compared with TCP/IP). It's great for realtime gaming. And OSC has proven very popular amongst musicians and even lighting engineers. There's an old joke that nerds often tell to explain UDP:

"I'd tell you a joke about UDP, but I'm not sure if you'd get it".

Understand the joke and you pretty much understand UDP.
For some applications, UDP is a great fit. For games, for example. If you're continually updating a number of other players across a network with your character's location, UDP is perfect. It's fast, lightweight and does the job. It doesn't just address one target - it can be broadcast across an entire network easily. TCP/IP would be a poor substitute: its latency, error-checking and resend-on-fail, and delivery to only one recipient at a time all add to the time it takes to update some game-world co-ordinates. Repeat that for multiple players sharing a game, and you've got a pretty slow, unresponsive game. Compared to TCP/IP then, UDP is fast.

For realtime audio, UDP is also pretty good - for the same reasons.
When you're whirling an onscreen rotary encoder, and the app is blarting out a stream of values as the encoder position changes, you want to be able to send a high volume of messages quickly. You want the realtime feedback (the change in volume) to be almost instantaneous, not laggy.

For all UDP's strengths as a high-volume, high-speed transport layer, it also has one major weakness: you never know if the message was received. It's a bit like shouting into the darkness. There's no specific end-point, you just put a message onto a port number and anyone who is listening gets the message.

In contrast, sending data via TCP/IP is a bit more akin to using Royal Mail's parcel tracking service. It's sent to a specific address. It's slower. When the parcel arrives, confirmation is requested and sent back to the sender to acknowledge everything arrived as it should. Sending data via TCP/IP has an "overhead" but at least you know your data has reached its destination.
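You can see the difference for yourself with a quick desktop C++ sketch using POSIX sockets - nothing to do with our actual project code, and the port number is just an arbitrary pick of something (probably) unused. A UDP send to a port with nobody listening cheerfully reports success; a TCP connect to the same port gets refused outright.

```cpp
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>

// Fire a datagram at 127.0.0.1 on a port with (probably) nobody listening.
// sendto() reports success anyway - that's fire-and-forget in action.
bool udpSendSucceeds(const char *msg, int port){
     int s = socket(AF_INET, SOCK_DGRAM, 0);
     sockaddr_in addr;
     memset(&addr, 0, sizeof(addr));
     addr.sin_family = AF_INET;
     addr.sin_port = htons(port);
     addr.sin_addr.s_addr = inet_addr("127.0.0.1");
     ssize_t n = sendto(s, msg, strlen(msg), 0, (sockaddr*)&addr, sizeof(addr));
     close(s);
     return n >= 0;     // "sent" - even though nobody heard it
}

// TCP won't even shake hands if there's no listener:
// connect() fails with ECONNREFUSED.
bool tcpConnectSucceeds(int port){
     int s = socket(AF_INET, SOCK_STREAM, 0);
     sockaddr_in addr;
     memset(&addr, 0, sizeof(addr));
     addr.sin_family = AF_INET;
     addr.sin_port = htons(port);
     addr.sin_addr.s_addr = inet_addr("127.0.0.1");
     int r = connect(s, (sockaddr*)&addr, sizeof(addr));
     close(s);
     return r == 0;
}
```

Compile with g++ and point both functions at the same unused port: UDP happily "delivers" into the void, TCP refuses to play at all.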

So why is using OSC and UDP such a pain for our particular application?

It's important to understand that we're not saying UDP sucks and TCP/IP is great. Just that there are better reasons for choosing one over another to match the requirements of the project.

For realtime gaming, UDP is ideal. If a packet of data doesn't arrive at the destination (if you shout into the darkness but your voice is drowned out by a passing drunk singing "Danny Boy" outside your door at 2am) it doesn't really matter. Because any second now, there'll be another packet of data coming along. And another. And at the receiving end, the missed data doesn't really cause a problem. If, for example, you're updating an on-screen avatar, whose location is being updated many times a second, the odd dropped set of co-ordinates is hardly noticeable. The on-screen character might jump four pixels instead of two, or it might be so far in the distance that the jump in game-world position is unnoticeable.

Similarly if you're using an OSC controller over UDP to, say, make some lights go up and down, the odd missed packet of data doesn't really matter. If you turn a virtual rotary encoder and the lights don't immediately respond, because you're looking at them for feedback, you know to turn the wheel a little bit more - a whole heap of data gets blasted towards the lighting controller and it takes just one packet of data to update the light.

For high volume, high speed data, UDP is very useful.

However, for two-way communication, with long delays between messages, it's not quite so robust. In our situation, we're having to use a fire-and-forget "shouting into the darkness" communication method when what we really need to know is that our messages have been received (and, similarly, it's really important that we don't miss any messages coming back).

For this particular project, TCP/IP would be a much better communications protocol. We're sending low volume messages. Latency isn't a problem - if the response time between a user interacting with our hardware and the video starting to play was as much as a few hundred milliseconds, the end result would be no different!

But not being able to guarantee that messages are received could cause all kinds of headaches. Here's how the mode of operation should go.

  • User interacts with hardware
  • Message sent to video player
  • Hardware stops responding to user while video plays
  • Video player sends message after video ends
  • Hardware responds to user input again
  • Repeat ad infinitum

It only takes one dropped message to cause the whole system to appear to have stopped working. If we reach a trigger point and our device firmware says "tell the video to play and stop responding to user input until we get a message back" what happens if our message to the video player fails to arrive?

Our hardware is now in a "suspended state" and no return message is ever going to be received to turn it back on (since the video player hasn't been told to play a video and thus send an "end of video" message back.)

Or maybe the video player does get the "play video" message. Everything is fine. We stop responding to user input, while the video is playing, as required. But what if the "end of video" message coming back is lost? Our hardware never wakes up again!

In either scenario, we eventually end up with hardware that appears to have stopped working. And all for the sake of choosing one communications protocol over another. When queried, we were told that "both systems have worked well for us and our OSC libraries use UDP so that's what we're using".

Of course, it would be possible to implement a feedback loop, from devices to video player and back again - send a message and if no response is received within a certain timeframe, resend it. But then acknowledgements coming back from the video player are broadcast across the entire network, to every connected device. So how do we know which acknowledgement is for which device? By implementing some kind of address system? So... pretty much recreating the TCP/IP protocol. But by shouting. And resending a lot of data. Suddenly our fast, zippy UDP transport layer is bogged down with noise and multiple packets of data......
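To give a flavour of what "reinventing TCP" looks like, here's a little desktop C++ sketch of a send-until-acknowledged loop. It's not our production code - the channel function and retry count are made up purely for illustration - but it shows the shape of the machinery you end up bolting on top of UDP.

```cpp
#include <functional>

// A toy "send until acknowledged" loop over a lossy channel.
// The channel stands in for a UDP send-plus-wait-for-ack: it returns true
// only when the message (and its acknowledgement) actually get through.
// Returns the number of attempts it took, or -1 if we gave up.
int sendWithRetry(std::function<bool(int)> channel, int msg, int maxRetries){
     for(int attempt = 1; attempt <= maxRetries; attempt++){
          if(channel(msg)){
               return attempt;
          }
          // no ack within our timeout window - shout again
     }
     // without this give-up path, the hardware would appear "stuck" forever
     return -1;
}
```

And of course this still doesn't solve the problem of acks being broadcast to every device on the network - for that you'd need an addressing scheme on top, which is exactly the point.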

This isn't an anti-UDP rant or an anti-OSC moan, far from it.
But it's just to highlight - because there seem to be an awful lot of "computer people" out there unaware of what's going on under the hood - that sometimes pre-built libraries and handy, encapsulated modules of code, are not always the "best" fit.

Sometimes you actually need to understand what is going on. Especially if you're working in a commercial environment and billing a client for your time. Simply falling back on someone else's code and assuming everything is going to be alright because it worked for someone else in the past isn't good enough. Because they may have used it under a completely different set of circumstances, to achieve an entirely different result.

Please, people: from one bunch of nerds to another, be mindful of what you're doing, why you're doing it, and choose the most appropriate technology - not just the quickest/cheapest/easiest to prototype with. That way we can all build an internet-of-things to be proud of, not just a buggers-muddle of poorly-designed devices all fighting for our bandwidth!

Friday, 2 September 2016

Upgrading firmware in ESP8266 wifi modules

It's been a while since we played around with our wifi modules but in recent months we've been inundated with Arduino projects in the real world; the latest involves running an AVR off a lipo battery (actually, it's a li-ion but you get the idea) to interface with an ESP8266 wifi module.

So far so good.
Except our Arduino Pro Mini modules are set up to run at 8MHz from a 3.3V supply (an ATmega328 isn't guaranteed to be stable much above 12MHz on a 3.3V supply). And with the internal oscillator at 8MHz on an AVR, the error rate is far too high for a 115,200bps baud rate.
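To put some numbers on that: assuming the standard AVR UBRR arithmetic (baud = fcpu / (16 × (UBRR + 1))), even before you add the internal RC oscillator's own drift, the quantisation error at 8MHz and 115,200bps works out at around 8.5% - way beyond the couple of percent a UART will tolerate - while 9600bps comes in under 0.2%. Here's a quick desktop C++ check (our own helper, not anything from the Arduino core):

```cpp
// Percentage baud-rate error for an AVR UART clocked at fcpu,
// using UBRR = round(fcpu/(16*baud)) - 1 and
// actual baud = fcpu / (16 * (UBRR + 1)).
double baudError(double fcpu, double baud){
     long ubrr = (long)(fcpu / (16.0 * baud) + 0.5) - 1;
     double actual = fcpu / (16.0 * (ubrr + 1));
     double err = (actual - baud) / baud * 100.0;
     return err < 0 ? -err : err;
}
```

At 8MHz/115200 the nearest UBRR value (3) gives an actual baud of 125,000bps - a whopping 8.5% off. At 8MHz/9600 the error is a mere 0.16%.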

Which means either
a) change the baud rate on the ESP8266 or
b) change the controller to one that can run faster at lower voltages

Obviously a faster mcu running off a battery will drain it more quickly, so we're looking to reduce the baud rate if possible. Except our ESP8266 modules are running some pretty old firmware - and they don't support changing the baud rate.

The datasheet says we should be able to issue the AT+CIOBAUD command to query/set the baud rate. Each time we tried this (and even tried AT+CIPBAUD in case it was a typo!) the result was ERROR.

It looked like we were stuck with 115200bps.
But after a bit of digging around on the intertubes, we found this website:

And it talks through re-flashing the firmware on the wifi modules using a nice easy-to-use Windows interface (rather than the nasty run-it-and-hope Python scripts a lot of other sites recommend). It basically involves running the ESP8266Flasher.exe and downloading the v0.9.2.2.ATFirmware.bin - both of which we've bundled up and put here.

Firstly, wire up the module with the GPIO1 line pulled to ground (normally we leave this line floating). Then start the executable.

Select the .bin file (we're using v0.9.2.2 but other versions may be available on the interweb)

Hit the "download" button. It takes about 2-3 minutes (for ever in computer time) to re-flash the wifi module. Booting up, however, returned nothing but garbage

Normally the ESP8266 spits out some gobbledegook when it boots up - that's because the boot-up log is always spat out at 74880bps before the module returns to the baud rate set by the AT+CIOBAUD command.

But we expected at least some kind of "OK" or "ready" message. Instead we got nothing. Trying to enter AT commands resulted in more garbage. So out of curiosity, we set our serial monitor baud rate to 9600 and tried again. This time, the response was much more meaningful. Not only could we see that we were running newer firmware, but we also received valid responses from the AT+CIOBAUD? command.

As it turns out, this new firmware defaults to 9600bps, which is exactly what we were looking for - so there's no need for us to change the baud rate at all. And - more importantly - we're now able to communicate with the wifi modules using an Arduino running at 8MHz off its own internal oscillator, powered by a lipo battery.

It's only a small victory. But a success all the same!

Friday, 26 August 2016

A bit more on PWM IR and Arduino

We've had a few questions about our PWM IR data transfer idea. And most of them revolve around "why send an end of message marker at the start and the end of the data?"

Some people went so far as to suggest that only an end marker would be necessary. We disagree with this. Some people asked why we weren't using a start marker and an end of message marker. This makes some sense. But here's why we did it the way we did:

Firstly, we need a marker at the start AND end of the message. Otherwise while the IR receiver is exposed to sunlight, it might start creating extraneous bits and bytes (unlikely, since we have a defined PWM width we listen out for and ignore anything outside of these widths, but still possible).

At the start of a message, we clear down our byte value buffer so we know that what follows is being added to a "clean" variable, and not tacked onto the end of some random noise.

At the end of a message, we parse the contents of the bits and bytes received. We could have used a different start pulse, but our pulse widths are already getting quite wide. A bit value of zero is up to 4ms long, a bit value of one is up to 12ms long, and our EOM marker is up to 20ms long. To make another clearly-definable pulse width, we'd probably have to go up to 40ms long. That's as long as it would take to send ten zero values!

There's nothing wrong with creating a different start-of-message pulse width, but we just felt it was unnecessary. When we see an end-of-message width at the start of a message, we'll parse any random noise that's been received and expect it to fail (since all messages have a checksum byte at the end). After parsing a message we reset the internal buffers anyway. So there's no reason why we can't use the same pulse width at both the start and end of each message.

So that's what we did.
Hope that clears that up!

Wednesday, 24 August 2016

Sending/receiving IR data via PWM on an Arduino

Many years ago we did some low-level data exchange using radio modules and Manchester encoding. In a recent project, for real-life work work, we were tasked with sending simple packets of data via IR. Naturally a 38kHz IR receiver was tried and worked quite well.

Every now and again, the received data sort-of burped, so we sent each byte along with its complement value (so a value of, say, 0xF0 would always be followed by a value of 0x0F, or b11110000 followed by b00001111). The idea being that for every pair of 8-bit values, the XOR sum would always be exactly 0xFF.

We found that if we sent a single byte value, followed by its complement, and repeated this "packet" of data twice in quick succession, even if one packet failed validation, the second would always "get through".
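In code, the byte-plus-complement check is tiny. Here's a desktop C++ sketch of the idea (the function names are our own, just for illustration): any pair that doesn't XOR to exactly 0xFF was corrupted somewhere along the way.

```cpp
// Build the complement byte to send immediately after a data byte.
unsigned char complementOf(unsigned char value){
     return value ^ 0xFF;
}

// A received pair is only trusted if byte XOR complement == 0xFF;
// any burped bit in either byte breaks the relationship.
bool pairIsValid(unsigned char value, unsigned char complement){
     return (value ^ complement) == 0xFF;
}
```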

Everything was great - for one-way communication.
Once we decided we wanted two-way comms, things got a bit trickier. But only because we're working to really tight margins when it comes to space; we simply don't have enough room to mount a 38khz IR receiver and a 3mm IR LED in two different devices.


With space being tight, we figured that we could use an IR reflective sensor to give us our IR receiver and LED in a single, tiny, 1206-sized package. Some IR reflective sensors are just a 3mm LED and a 3mm photo-transistor in a single package.

But what we were after was one of those really tiny ones, with the same kind of pinout and independently controlled LED/phototransistor combination, but in a not-much-bigger-than-a-1206 sized package.

The idea with these is that you activate the IR LED and then look for a reflected signal (when the photo-transistor receives IR light, it can be used to pull an input low, for example). But there's nothing to say we can't use two of these, facing each other, and have one device "listening" while the other is "talking". Instead of reflected IR, we'd just capture IR light sent directly from the other device. Genius!

The only thing is, our photo-transistor doesn't filter for a 38kHz carrier like the larger, single-purpose IR receivers do (that filtering is useful if you're firing IR light across a room to a TV and need to ignore extraneous IR light from the sun and fluorescent lights). Our photo-transistor will simply conduct as soon as it sees any IR light, from any source. But given that we'll be transmitting data in a controlled environment (and not across the room) we either have to generate our own 38kHz carrier wave (and decode it at the receiving end) or simply forget all about it...
Guess which approach we took?

So doing away with the 38khz carrier, we simply have a receiver that pulls an input pin low when it can see IR light. We decided to try simple PWM to send data into the sensor.

The basic approach is that whenever a device sees a high-to-low transition on the input pin, we reset a timer/counter. This is because a high-to-low input signal means the IR LED has just gone from low-to-high (i.e. just turned on). Then, when the input goes low-to-high, we know that the LED has just turned off (since we're using pull-up resistors on the input and using the photo-transistor to pull to ground when it sees IR light).

Following a low-to-high input, we look at the width of the pulse (in milliseconds). A simple look-up goes like this:

1-3 ms = bit value zero
6-10 ms = bit value one
15-20 ms = start/end of message marker

any other duration, ignore.
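That look-up translates into just a few lines of C++. Here's a desktop-testable version (the enum names are our own shorthand, not from the actual firmware):

```cpp
// The four things a received pulse width can mean.
enum PulseType { PULSE_ZERO, PULSE_ONE, PULSE_MARKER, PULSE_IGNORE };

// Classify a pulse width (in milliseconds) using the look-up above.
PulseType classifyPulse(long ms){
     if(ms >= 1 && ms <= 3){ return PULSE_ZERO; }
     if(ms >= 6 && ms <= 10){ return PULSE_ONE; }
     if(ms >= 15 && ms <= 20){ return PULSE_MARKER; }
     return PULSE_IGNORE;     // noise - sunlight glitches and the like
}
```

Note the deliberate dead-bands between the ranges: a pulse of 4-5ms or 11-14ms is ambiguous, so it's simply thrown away rather than guessed at.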

Whenever we get a "wide" pulse, we've either just started, or just ended, a message. Irrespective of which, we look at the previously received bits of data and parse them. At a "start" pulse, we'd expect there to be no previous data, so we can skip parsing. After an "end" pulse, we should have a load of bits of data to parse. After a wide pulse, we reset the binary bit counter and set the incoming message buffer to blank again.

It's simple.
It's crude.
It works surprisingly well.

The only thing is, we want to make sure we're not parsing gibberish data. Which means we need some kind of checksum, to validate all the previously received data. We figured that the easiest method would be to send data in 3-byte packets, with the fourth byte acting as the checksum.

On receiving data, we'd recreate the checksum value from the first three bytes and compare it to the fourth. If the fourth byte and the checksum byte match, we accept the data and decide what to do based on the three-byte message.
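The packing and validation boils down to a couple of helpers - here's a plain desktop C++ sketch of the scheme (helper names are our own; the actual sketches do the same job inline). The first data byte ends up in the top byte of the 32-bit value, and the XOR of the three data bytes goes in the bottom byte.

```cpp
// Pack three data bytes plus an XOR checksum into one 32-bit value.
unsigned long packMessage(unsigned char a, unsigned char b, unsigned char c){
     unsigned char checksum = a ^ b ^ c;
     return ((unsigned long)a << 24) | ((unsigned long)b << 16) |
            ((unsigned long)c << 8) | checksum;
}

// At the receiving end, rebuild the checksum from the first three
// bytes and compare it with the fourth.
bool messageIsValid(unsigned long packet){
     unsigned char a = (packet >> 24) & 0xFF;
     unsigned char b = (packet >> 16) & 0xFF;
     unsigned char c = (packet >> 8) & 0xFF;
     unsigned char d = packet & 0xFF;
     return (a ^ b ^ c) == d;
}
```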

The send routine uses simple delay routines to send bursts of IR light

int ir_pin = 2;
long ir_val;

void sendIRValue(){

     // start of message marker (a long ~16ms pulse)
     digitalWrite(ir_pin, HIGH);
     delay(16);
     digitalWrite(ir_pin, LOW);
     delay(4);

     for(int i=0; i<32; i++){
          // send the msb first (the receiver shifts incoming bits in from the right)
          int j = (ir_val & 0x80000000L) ? 1 : 0;

          // a one is a wide (8ms) pulse, a zero is a narrow (2ms) pulse,
          // with a short gap between each bit
          digitalWrite(ir_pin, HIGH);
          delay(j==1 ? 8 : 2);
          digitalWrite(ir_pin, LOW);
          delay(4);

          ir_val = ir_val << 1;
     }

     // end of message marker
     digitalWrite(ir_pin, HIGH);
     delay(16);
     digitalWrite(ir_pin, LOW);
}

void setup() {
     // put your setup code here, to run once:
     pinMode(ir_pin, OUTPUT);
     Serial.begin(9600);

     // light the LED for a couple of seconds just so
     // we can see if it's working
     digitalWrite(ir_pin, HIGH);
     delay(2000);
     digitalWrite(ir_pin, LOW);
}

void loop() {
     // put your main code here, to run repeatedly:

     long k = random(0,256);
     long j = random(0,256);
     long i = random(0,256);
     long h = k ^ j;     // build the XOR checksum from the three data bytes
     h = h ^ i;

     Serial.print(F("sending values - k:"));
     Serial.print(k, HEX);
     Serial.print(F(" j:"));
     Serial.print(j, HEX);
     Serial.print(F(" i:"));
     Serial.print(i, HEX);
     Serial.print(F(" checksum:"));
     Serial.println(h, HEX);

     // squash the three data bytes and the checksum into a single 32-bit value
     k = k << 24;
     j = j << 16;
     i = i << 8;

     k = k | j;
     k = k | i;
     k = k | h;
     ir_val = k;

     Serial.print(F(" sent:"));
     Serial.println(k, HEX);

     sendIRValue();
     delay(1000);
}

The receive routine uses interrupts to detect when the IR photo-transistor goes either low-to-high or high-to-low

int ir_in = 2;
int led_pin = 13;

long mil_ir_start;
long mil_ir_end;
long mil_ir;

long ir_val;     // we'll just make this a 32-bit value
int ir_bit_count;

// IR high and IR low are back-to-front in the receiver.
// If we're sending IR, the sensor line will be low (it's an open collector that
// pulls the input LOW when it can see IR light) so IRLow relates to the LED being lit

void IRLow(){
     // this fires on a high-to-low transition
     // whenever the line is pulled low it's because we're receiving IR light
     // so reset the timer/counter
     mil_ir_start = millis();
}

void IRHigh(){
     // whenever the line floats high, it's because we've just turned off the IR light
     // that is sending data to the receiver, so measure the width of the last pulse
     // and do something with the data if necessary
     mil_ir_end = millis();
     mil_ir = mil_ir_end - mil_ir_start;
     if(mil_ir < 11){
          // debugging: echo the duration of each data pulse as it arrives
          Serial.println(mil_ir);
     }

     // decide what to do with the pulse width
     if(mil_ir >=1 && mil_ir <=4){
          // treat this as a zero
          ir_val = ir_val << 1;
          ir_bit_count++;
     }else if(mil_ir >=6 && mil_ir <=12){
          // treat this as a one
          ir_val = ir_val << 1;
          ir_val = ir_val|1;
          ir_bit_count++;
     }else if(mil_ir >=14 && mil_ir <=20){
          // this is a start/end message marker
          // if we've received a message, validate it and parse
          if(ir_val != 0){ parseMessage(); }
          // now reset everything ready for the next blast
          ir_bit_count = 0;
          ir_val = 0;
     }
}

void parseMessage(){
     // a message can be up to three bytes long
     // with a simple XOR checksum in the fourth byte
     // so split the 32-bit value back into four bytes

     int a = ir_val >> 24;
     int b = ir_val >> 16;
     int c = ir_val >> 8;
     int d = ir_val & 255;

     a = a & 0xFF;
     b = b & 0xFF;
     c = c & 0xFF;
     d = d & 0xFF;

     int k = a ^ b;
     k = k ^ c;
     if(k == d){
          // checksum success
          Serial.print(F("Received: "));
          Serial.println(ir_val, HEX);
     }else{
          // checksum fail
          Serial.print(F("checksum fail a:"));
          Serial.print(a, HEX);
          Serial.print(F(" b:"));
          Serial.print(b, HEX);
          Serial.print(F(" c:"));
          Serial.print(c, HEX);
          Serial.print(F(" d:"));
          Serial.print(d, HEX);
          Serial.println(F(" "));
     }
}

void IRChange(){
     int b = digitalRead(ir_in);
     if(b==HIGH){ IRHigh(); } else { IRLow();}
}

void setup() {
     // put your setup code here, to run once:
     pinMode(ir_in, INPUT_PULLUP);
     pinMode(led_pin, OUTPUT);
     Serial.begin(9600);

     // create an interrupt on pin 2 (IR receiver)
     attachInterrupt(digitalPinToInterrupt(ir_in), IRChange, CHANGE);
}

void loop() {
     // put your main code here, to run repeatedly:
}

We also added a bit of debugging to ensure that we were getting accurate data. Whenever we see a single pulse of IR light, we output the duration of the pulse. This makes it easy to debug.
Received: BE2EF969
checksum fail a:9 88 7F EA
Received: F14111A1

-- interrupted here ---
checksum fail a:0 0 0 1B
Received: 64AD21E8
Received: 57377212

If we take the first line and look at the length of the pulses, we can see that 8ms = 1 and 2ms = 0. So our pattern becomes

1011 1110 0010 1110 1111 1001 0110 1001

A quick binary-to-hex conversion shows us that

1011 = B
1110 = E
0010 = 2
1110 = E
1111 = F
1001 = 9
0110 = 6
1001 = 9

And that's exactly what appeared in our serial output.
So we've got some data. We take the first byte 0xBE and XOR it with the second byte 0x2E. Then we take the result and XOR that with the third byte 0xF9 and the result is.... 0x69.

And that's what our fourth byte value is. So we know we've got a valid message. If any single bit value of the message were incorrectly received, the XOR sum of the fourth byte would be different, and we'd know to throw that message away.
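In fact you can brute-force that claim: flip each of the 32 bits of a known-good packet in turn, and the XOR check fails every single time. Here's a quick desktop C++ check (our own helpers, restating the checksum rule from the sketch):

```cpp
// Restate the validation rule: XOR of the top three bytes
// must equal the bottom byte.
bool xorChecksumOk(unsigned long p){
     unsigned char a = (p >> 24) & 0xFF, b = (p >> 16) & 0xFF;
     unsigned char c = (p >> 8) & 0xFF, d = p & 0xFF;
     return (a ^ b ^ c) == d;
}

// Flip every one of the 32 bits in turn and confirm that each
// single-bit corruption is rejected by the checksum.
bool allSingleBitErrorsCaught(unsigned long good){
     for(int i = 0; i < 32; i++){
          if(xorChecksumOk(good ^ (1UL << i))){ return false; }
     }
     return true;
}
```

A flipped bit in any of the three data bytes changes the recomputed checksum, and a flipped bit in the checksum byte itself no longer matches - so every single-bit error is caught (multi-bit errors in matching positions can still slip through, which is the known limit of a simple XOR checksum).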

To prove this, we interrupted the IR signal as it was trying to send data.
At this point (see debug output above) as soon as we received a "long" pulse to indicate the end of a message, the XOR checksum failed (because the XOR sum of the bits received did not match the value of the final byte in the message).

As we're sending 32 bits with a maximum delay of 8ms, the longest time we'd spend sending a message is 256ms (actually it'd be 288ms because we have a 16ms long pulse at the start and at the end of each message). So about a third of a second to blast data across.
Often it's much less (since a zero bit value takes only 2ms to send).

So it's slow.
And crude.
But also very robust.
At least in our, specific, controlled environment.

Which means it'll do for now!

Sunday, 21 August 2016

Google - please stop breaking the internet

Google is killing the internet.
For a while, we've argued it's advertising.
Back in the 90s it was porn.

But in 2016, it's Google.
And they need to stop it!

Google Chrome was a great browser when it launched, but for years now it has been a bloated monstrosity that out-bloats even Microsoft's most cringeworthy early IE offerings (in fact, we thought IE10/11 was a great browser - exactly what Chrome promised to be).

But it's not just their browser. Google also encourages websites to link to their hosted AJAX libraries. Which makes some websites run slooowly, as the entire javascript framework has to load from an external site before the page even renders. It also encourages web builders to embed Youtube videos directly into their pages. And that causes some websites to run slooowly, as often the jQuery-based layout requires the video to populate the frame before the rest of the site appears.

And Google also infects almost every website (that hosts AdSense banners) with stupid, slow-to-load adverts that cause the entire computer - not just the browser, or the current page - to lock up entirely until every last bit of shite has finished loading.

In short, the experience of browsing a simple web page in 2016 is slower than it was in 1998. And back then we had 56k dial-up and GIFs. Today I've got 100Mb broadband and a computer at least eight times more powerful than my desktop PC of the time.

The irony that we're hosting a blog on Google-owned Blogger, and embedding Google-owned Youtube videos in the pages isn't lost on us, here at Nerd Towers. And it has been very tempting to give in to their almost constant demands that we apply AdSense to the account and rake in about twenty pence per month in return for making the site unusable with adverts - but there's no point complaining about other sites spoiling their user's experience and then doing the same thing ourselves!

Now I'm no internet privacy prude.
I don't mind Google reading my Gmail emails in return for free email and targeted ads. I'm not one of the tin-hat brigade who get paranoid about companies tracking my every move on the internet. I can't help but think that it'd make for pretty boring reading. I understand that linking content from different hosts makes for a richer experience.

But it's almost like Google is actively breaking the 'net.
The latest example of this was Youtube simply not working tonight. And not just in the Chrome browser.

Tonight I tried to tune in to the Terrain Tutor's live stream to see Mel in his new studio in Stoke-on-Trent. The video refused to play. The comments and live-chat thing worked just fine, but the video frame showed a freeze-frame of a gurning face and no video (or audio) was forthcoming. The little busy logo just spun in the centre of the video, where a play/pause button should have been.

That was in Chrome. In IE the video was replaced with a static white-noise animation and the message "an error occurred, please try later". Same in Edge, and Firefox.

The problem was with the Youtube site, not my computer, as the same result happened on a second PC and the video refused to play (in a Chrome browser now I come to think of it) on my phone. Then I came across this question:

load resource: net::ERR_QUIC_PROTOCOL_ERROR

so out of curiosity I entered chrome://flags/#enable-quic into my address bar.

And disabled Experimental QUIC protocol.
Suddenly Youtube started working properly.
And websites which were either unusable, or ran really, really slowly, were now just running annoyingly slowly (instead of unusably slowly). Now how this managed to affect the IE/Edge browser, I've no idea.

But with all the shit that Google is pumping out into the internet, it's making many websites literally unusable. The problem is, even if I were to boycott every Google-based product and only ever use Opera browser to view web pages, the Google Rot is in just about every web site written since 2004 - so there's no getting away from it.

Please, Google.
The internet used to be amazing.
Put it back how you found it.
If you can't make it better, just stop bloody-well breaking it.

Saturday, 20 August 2016

Soldering hall sensors to our copper strips

With the copper in place and the hall sensors prepared, it's time to stick them all down! It's quite easy (if a little tedious) but if you're not confident with soldering, you can always do away with the solder and use conductive glue (we'll have to make a video showing how to use conductive glue as an alternative).

We prefer solder because it's instant. Conductive glue requires time to set (usually overnight) before it even works properly. With solder, as soon as it's cooled, you've got an electrically conductive, as well as a strong mechanical, joint.

Here's how we go about sticking our hall sensors down

This video only ever had an intended audience of one. We made it to give to one person who hadn't seen any of the ongoing builds, to see if they could understand and follow it. It's poorly lit, the sound is terrible, the noise from slurping coffee is distracting, and hands and arms get in the way all over the place. Sorry.