Sunday, 23 April 2017

Save some Arduino RAM when using strings with the F macro

Anyone who has ever written debug messages to themselves while developing on Arduino will know: add too many and all your MCU's RAM gets chewed up pretty quickly.

In "production" code, it's quite common to flash an LED to indicate what's going on, but that gets pretty tedious to debug when you're making lots of changes to your code as you develop, so it becomes common to little complex routines will little Serial.Println statements, to show where in the logic control you're up to.

So you might write something like

while(something){
  Serial.println("Here's what's going on");
}

After a few of these (ok, maybe a couple of dozen or more) you'll find your RAM usage creeping up. Debugging code that uses wasteful libraries means either re-writing someone else's code (negating the benefits of a library-based development system) or reducing the message length (until you're doing little more than an alpha-numeric equivalent of flashing an LED).

More experienced users might write something like

while(something){
  Serial.println(F("Here's what's going on"));
}


At first the difference is difficult to spot. But that all-important F macro (which isn't particularly well documented in the Arduino help files) makes a massive difference. What it does is keep your string message in program (flash) ROM, rather than having it copied into RAM at start-up, which is what happens to ordinary string literals.

Replace all instances of "my string" with F("my string") and you'll find your RAM usage plummets (while your program ROM size increases by roughly the same amount as you've saved in RAM).
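If you want to see the savings for yourself, a commonly used trick on AVR-based boards is a little freeRam function (a helper sketch, not part of the official Arduino API) which reports the gap between the top of the heap and the bottom of the stack:

// returns the number of free bytes between the heap and the stack
// (AVR-specific; __heap_start and __brkval come from avr-libc)
int freeRam() {
  extern int __heap_start, *__brkval;
  int v;
  return (int) &v - (__brkval == 0 ? (int) &__heap_start : (int) __brkval);
}

void setup() {
  Serial.begin(9600);
  Serial.print(F("free RAM: "));   // note: even this message uses the F macro!
  Serial.println(freeRam());
}

void loop() {}

Call it before and after wrapping your strings in F() and compare the two numbers.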

We recently played about with the excellent Nokia 5110-compatible LCD library and built a rotary-encoder based menu system (for the light-up guitar I promised Keith). There are lots of strings of text used and - sure enough - after coding a few menus, our RAM usage was on the up.



While it seems trivial to add the F macro just before each of our strings, in this particular case it wouldn't actually work. See, our LcdString function accepts not an Arduino-type string object, but a pointer to a character array.

So if we tried to write LcdString(F(" string ")); it simply wouldn't work (the compiler returns a data type mismatch error).

The answer is a quick-and-dirty function into which we can pass an F() string and get back a character array, which we can then pass into our LcdString function.
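Something along these lines would do the job (a minimal sketch of the idea, not necessarily the exact helper we used - the buffer size is an arbitrary assumption, and strncpy_P comes from avr/pgmspace.h):

#include <avr/pgmspace.h>

// copy a flash-resident F() string into a static RAM buffer and return
// a pointer to it, so it can be passed to char-array-based functions
char msg_buffer[64];   // must be at least as long as your longest message

char* string2char(const __FlashStringHelper *flashStr) {
  strncpy_P(msg_buffer, (const char*)flashStr, sizeof(msg_buffer) - 1);
  msg_buffer[sizeof(msg_buffer) - 1] = '\0';   // guarantee null-termination
  return msg_buffer;
}

// usage - the string literal lives in flash, not RAM:
// LcdString(string2char(F(" string ")));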


Now we can write our string calls using the F-macro (to push the strings into program ROM space and free up RAM) but still pass them as character arrays into the functions that prefer character arrays over the Arduino string class.


In our menu test, we managed to conserve over 360 bytes just by implementing the F-macro with a string2char function. Given the ATmega328 has just 2kB of RAM but a massive 32kB of flash ROM, wherever possible we try to push our strings into ROM.

We managed to reduce our RAM usage by over a third (34%) for a modest 2% increase in program space. Given there's more to this project than just the menu system, we'd take any chance to reclaim over 17% of the total available RAM for use in the rest of the program!

So next time you're getting close to using up all your RAM, find all those little debug messages and wrap them in the F-macro. And if you're passing strings into other functions, you can still use it and simply pass your F-strings into the string2char function if the function prefers a character array.


Saturday, 22 April 2017

Making a light-up-guitar for Keith

Having spent a few days in Dublin, I got chatting to my brother-in-law Keith. He was asking about the guitar project we worked on a while back and we swapped tales about learning (or failing to learn) the pentatonic scales and target notes properly.

When I got back I promised I'd build him a guitar to demonstrate how it all worked. Which was grand, except since moving into the workshop bungalow, I've not been able to find the massive PCBs to connect up the WS2812B RGB LEDs.

Not wanting to go back on a promise meant only one thing - hand-soldering all 96 of the little buggers with a pair of tweezers and some thin-gauge wire. I thought I'd left wire-wrapping behind in the 90s! Luckily there were a couple of laser-cut fingerboards left over from building the last few guitars about a year ago, so I set about super-gluing some LEDs to the underside.


I got the idea from messing about with the electronic board game; having PCBs built for it would have been prohibitively expensive, so we swapped the circuit boards for strips of copper tape and hand-built everything with loose components. By placing the LEDs the right way around, I figured I could connect the data_in and data_out pins together easily, then just join all the power and ground pins to two strips of copper tape on each fret.


It took nearly two days of positioning, soldering, testing, debugging, re-soldering - in between other work - but the end result was quite impressive.


Connections between each strip of lights were made up the centre of the fingerboard so the outer sides could be glued to the guitar neck; since the truss rod runs down the middle of the neck, we wouldn't be gluing the centre of the fingerboard anyway.



A quick rainbow sketch on an Arduino with the FastLED library and we had a rather attractive display. Never mind lighting up frets, learning scales and showing how to play the guitar - I quite fancy another one of these with just the rainbow pattern. I might not be the best performer at the next Open Mic Night at the Pebbles, but I'll certainly be the brightest!
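For anyone wanting to try the same, the rainbow test is only a few lines of FastLED code. A minimal sketch (the data pin and timing here are assumptions, not necessarily the values we used):

#include <FastLED.h>

#define NUM_LEDS 96   // one for each of our hand-soldered LEDs
#define DATA_PIN 6    // whichever pin the first LED's data_in is wired to

CRGB leds[NUM_LEDS];
uint8_t startHue = 0;

void setup() {
  FastLED.addLeds<WS2812B, DATA_PIN, GRB>(leds, NUM_LEDS);
}

void loop() {
  // paint a rainbow along the strip, shifting the start colour
  // each frame so the whole pattern appears to move
  fill_rainbow(leds, NUM_LEDS, startHue++, 4);
  FastLED.show();
  delay(20);
}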

Thursday, 20 April 2017

Unity, raycasting and line of sight between objects

We're putting together a simple 2D/top-down game that makes extensive use of "line-of-sight" rules as players move around the game world. There are a few ways you can check for line-of-sight but they almost always involve drawing a line between two points, then seeing which objects (if any) intersect this line.

If we were coding this in any other language for any other system, that's probably how we'd do it anyway; create an equation to describe the line between the two points, then "walk along" the length of the line, one pixel/unit at a time, checking to see if the x/y position of any other object in the gameworld is close enough to the line to be considered intersecting with it.

Unity provides this functionality already with its Physics.Raycast function.

Provide the function with a start point vector and a direction vector and it imagines an infinitely long line (the "ray") from the origin, and returns the first object that the ray collides with (if any). There's also Physics.RaycastAll which does the same thing, but returns an array of all objects hit by the ray.

We made use of the RaycastAll function in our line-of-sight checks (although we're building a 2D game, the same principles of 3D development still apply; we just treat everything as if it were all on the same Z-plane). We put three moving objects into our game world, with one of them hidden behind a wall. We then updated the position of each object and ran our line-of-sight checks from the moving object to all other objects in the world. The checks involved raycasting from the moving object to each other object (in turn) and checking the array of collisions.

If the array was empty, there were no detected obstacles along the line, and so we said that there was a line-of-sight between the two (in future development we'll have to include things like facing and field-of-vision and so on, but for now we're just trying to decide if an obstacle exists between two points). If any obstacle was returned in the array, we said that no line-of-sight existed between the two objects.



On the face of it, a simple Raycast function might do the job, as we're only interested - at this stage - in the binary option of "is there an obstacle between these two points". But we wanted to use the RaycastAll function to return ALL objects so that in future we might be able to assign "visibility" to different obstacles. Some obstacles may, for example, be see-through, but we still want them to act as an obstacle for purposes other than viewing. A classic example might be a glass window: you can see through it but it also acts as a physical barrier.

So we don't just want our line-of-sight function to return false if any old obstacle exists between two points - we want to inspect each obstacle type between the points and decide whether or not to include them in our line-of-sight check. So instead of Physics.Raycast, we used Physics.RaycastAll.

Everything seemed to be working just fine for a while; our hidden object remained hidden and the visible object revealed itself in good time. The function correctly identified whether or not there was a line-of-sight between all of the objects. Then something funny happened - despite there being a perfectly clear run between our first two objects, the LOS function started returning false.



Even more peculiarly, sometimes the function returned true (there is a line of sight between these two objects) and sometimes false, depending on which object we used as the source and which as the destination. Yet as we hadn't yet introduced rotation or facing into our function, it didn't make sense that an obstacle was found going from A to B but none was found going from B to A.

After much puzzling and re-reading the Unity documentation, we eventually worked out the problem. Our ray was continuing beyond the object being tested. So although we thought we were asking "are there any obstacles along a ray between these two points?", the function was actually answering "are there any objects along an infinitely long ray, starting at point A and continuing in the direction of point B?"



Of course, as soon as we moved an object so that there was a wall behind it, the function found the wall. The ray passed through the second object, struck the wall behind and said "yes, I found an obstacle along that ray".

What we needed to do was limit the length of the ray. The RaycastAll function has an overload which allows you to enter a start point, a direction and a magnitude (the maximum length of the ray). We created our ray by subtracting the gameworld co-ordinates of the source object from the co-ordinates of the destination object; this creates a vector describing the path between the two objects, which we use as our ray's direction. Having created the ray, we then used the magnitude of the direction vector as the length of the ray.

As soon as we limited the length of the ray to match the magnitude of the vector describing the direction from one object to the other, the function worked as expected, both "forwards" and "backwards" (i.e. it didn't matter which object we used as the source and which as the destination).

bool hasLOS(GameObject source, GameObject dest){
   // firstly cast a ray between the two objects and see if there are any
   // obstacles inbetween (some obstacles have "partial visibility" in which
   // case we may or may not want to include as a "hit")

   RaycastHit[] hits;
   bool obj_hit = false;

   Vector3 dir = dest.transform.position - source.transform.position;
   Ray ry = new Ray ();
   ry.origin = source.transform.position;
   ry.direction = dir;

   hits = Physics.RaycastAll (ry, dir.magnitude);
   Debug.DrawRay (source.transform.position, dir, Color.cyan, 4.0f);

   foreach(RaycastHit hit in hits){
      // here we could look at an attached script (if one exists) on the object and
      // decide whether or not this should actually constitute a hit
      Debug.Log("LOS test hit from "+source.transform.position+" to "+dest.transform.position+" = "+hit.transform.parent.gameObject.name);
      obj_hit = true;
   }

   return(!obj_hit);
}


Within the foreach loop we can put some further testing to decide whether or not an obstacle has an effect. So in the case of a target on the other side of a glass wall, we could ignore the glass object when testing for line of sight (can we see the object behind the glass?) but include it as an obstacle when using the same function to decide whether, say, a bullet fired from one object would reach the other.

The same result could be achieved using trigonometry (lots of tan/cos functions) but Unity does provide lots of nice, easy, helper functions, such as Raycast and RaycastAll. Thanks Unity!


Sunday, 16 April 2017

Serial UART hub/network with master/slave devices

We recently had cause to build a simple serial/UART "network" of slave devices. We had a single "controller" device (which receives data from a PC and broadcasts it along the bus) and a number of similar "slave" type devices.

Normally, when it comes to multiple devices along a bus, we'd be thinking of either SPI (broadcast the message to all devices with an identifier in the message to which the appropriate devices respond) or I2C (each device could have its own unique hardware ID to which we address the messages).

But for a recent project we were asked if we could create a serial/UART bus. At first it seemed quite straightforward - simply tie all the TX lines of the "slaves" together and connect them to the "RX" of the "master", and vice versa: tie all the RX lines of the "slaves" to each other and connect them to the "TX" of the master device.

The basic idea is that the master would broadcast a message to all devices, including a device ID in the message. When any device receives an end-of-message marker, it looks at the device ID. If the message is not intended for that device, it simply ignores it.

The theory works great.
Sometimes in hardware it works just fine.
But sometimes it goes horribly wrong.

Now of course if two devices try talking at once, you just get garbled nonsense (so at the end of each message we include a simple XOR checksum to confirm that a message is valid). So this set-up only works if you can be sure that only one device is going to try to use the bus at any one time.
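As a rough sketch of the receiving end (the frame layout here is an assumption - a device ID byte, a payload, and a trailing XOR byte - not necessarily the exact format from the project):

// assumed frame: [deviceID][payload bytes...][checksum]
// where the checksum is the XOR of every byte before it
bool isValidMessage(const uint8_t *msg, uint8_t len) {
  if (len < 2) return false;          // need at least an ID and a checksum
  uint8_t x = 0;
  for (uint8_t i = 0; i < len - 1; i++) {
    x ^= msg[i];                      // XOR together everything but the last byte
  }
  return (x == msg[len - 1]);         // must match the trailing checksum byte
}

// a slave then simply ignores valid messages addressed to someone else:
// if (isValidMessage(buf, n) && buf[0] == MY_DEVICE_ID) { handleMessage(buf, n); }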

But sometimes we were getting devices resetting. Not all of them, and not all at the same time. Just some devices, sometimes. Which in turn indicates that one device is trying to drive a line high, while another is trying to drive it low. When this happens, we're effectively creating a dead-short between power and ground; so it's no wonder that the devices are resetting!


By simply putting a diode on each of the TX lines and a pull-up resistor on the "master" RX line, we can overcome this problem easily. Now, when a device tries to drive its TX line high, the current can't get through the diode. But the pull-up resistor lets the shared TX line (connected to the RX of the master) float high, so the end result is the same.

But if another device drives the TX line low, it's enough to overcome the pull-up resistor, so the entire TX bus goes low (and the master RX line goes low). If one device tries to drive the TX line high and another low, the TX line goes low. The data at the other end might get garbled, but the important thing is that we don't get slave devices resetting.

It's basically the same open-drain idea used in I2C communications - drive a line low, release it to let it float high. But since we can't guarantee that our slave devices won't try to drive the TX line high, the diode simply blocks that behaviour. When no device is pulling the TX line low, it floats high - which is the idle state of a UART transmitter anyway.

Simple.
But a trick worth knowing!

Saturday, 15 April 2017

Creating primitives and textures in Unity

I love Unity. I love that you can write code and compile it to multiple platforms. I love that you can "hit up" the Asset Store and have a game working in a couple of hours. At least, a simple game.

But one of the things I've always fancied doing with Unity was having it load levels (from a web server perhaps) and create rooms and playing areas dynamically. We've played about with doing just that using pre-bought assets (it's not as easy as you'd think if you're working on a grid-based system, since most assets have their origin in the dead centre, not on one corner!)

So as a bit of an experiment, we played about with creating a map "plane" from primitives, onto which we'll dynamically load textures. So at the start of the "game" there's nothing on screen - then a few script calls and we'll create some primitive shapes (after all, most floors and walls are not much more than simple rectangles) and apply some textures.

It's worth noting that we're creating a 2D top-down type map, even though we're using 3D shapes (the 3D shapes allow us to work with complex principles such as rotation and line-of-sight later on down the line).


We've set up our camera as orthographic and have it pointing straight down. We also added a directional light and made it a child of the camera - effectively following it as it moves over the map. We also created a "gameWorld" empty gameobject just to hold all our dynamically generated content, in case we need to turn the whole world on/off for some reason in the future.

Now a couple of scripts to actually generate our primitive shapes and to apply textures to them. We're working on a grid-based map and each object we create in our game-world will be placed from the bottom-left corner:



But when you create a gameobject in Unity, the origin of the object is smack-bang in the centre. Which makes getting everything to line up in a grid a bit of a pain (especially if the objects are not perfectly square).


So whenever we create an object that we want to align on our grid, we "wrap it up" inside an empty gameobject and set the local x/y co-ordinates to half the height/width of the object. This way we can place our floors and walls without having to keep applying an offset to get the origin somewhere near the bottom-left corner.


With the gameobject in worldspace placed at 0,0, half of the floor tile is beyond our 0,0 position (ok, it's only a quarter section, but you get the idea).



By placing the tile inside an empty game object, we can place the parent at 0,0 and offset the child by half the height/width and get our tile to appear where we want it "in world space".

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class object_creator : MonoBehaviour {

    Material mat;
    Shader shdr;   

    // if you're using one-square to one-unity-unit keep track of it here
    // (in earlier versions, a 0.5 scaled plane - 5 units - represented a board
    // of 8x8 squares, in which case square size would be 5/8 = 0.625)
    private float square_size = 1f;

    // Use this for initialization
    void Start () {
       
    }

    void Awake(){
        shdr = Shader.Find ("Sprites/Default");
        if (shdr) {
            mat = new Material (shdr);
        } else {
            Debug.Log ("wtf");
        }
    }

    // Update is called once per frame
    void Update () {
       
    }

    public GameObject createObject(string objName, GameObject objParent, float x, float y, float z, float size_x, float size_y, float size_height){

        // creates a primitive (cube) wrapped inside an empty game object
        // which is placed at the gameworld position x,y       

        // the position of the (empty) game object is such that the origin is in the
        // bottom-left corner (not the centre as is usual with gameobjects)
        GameObject piece = new GameObject();
        piece.name = objName;
        piece.transform.parent = objParent.transform;
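        // note: the function's x/y are map co-ordinates on the ground plane and
        // z is height, so y goes into the world's Z axis and z into the vertical Y axis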
        piece.transform.localPosition = new Vector3 (x, z, y);
        piece.transform.Translate(new Vector3(-square_size/2, 0, -square_size/2));

        GameObject cube = GameObject.CreatePrimitive(PrimitiveType.Cube);
        cube.transform.parent = piece.transform;
        cube.transform.localPosition = new Vector3 (size_x/2, 0, size_y/2);
        cube.transform.localScale = new Vector3 (size_x, size_height, size_y);

        return(piece);
    }

    public void setTexture(GameObject o, string imageName){
        // get the child object in "o" with the name "Cube"
        // (this is the actual shape, the game object is the container)
        GameObject p = o.transform.FindChild("Cube").gameObject;
        // download the texture for this object
        string url="http://your_url/" + imageName + ".png";        
        StartCoroutine (downloadImage(url, p));
    }

    IEnumerator downloadImage(string url, GameObject o){
        if (url.Length > 0) {
            Debug.Log ("loading from " + url);
            WWW www = new WWW (url);
            yield return www;
            Texture2D tex = new Texture2D (www.texture.width, www.texture.height);
            www.LoadImageIntoTexture(tex);
            o.GetComponent<Renderer> ().material = mat;
            o.GetComponent<Renderer> ().material.mainTexture = tex;
            o.GetComponent<Renderer> ().material.shader = shdr;           
            Debug.Log ("Texture set");
        }
    }
}


Our "object creator" script is referenced by our "game controller" script.
When any primitive is created, it needs to be given a material to apply to it; so we create a global material, based on the "sprites/default" shader. This same material can be applied to all our primitive shapes. With a material applied, we can then change the texture property of each shape, with a newly-downloaded image, if necessary.

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class game_controller : MonoBehaviour {

   public GameObject world;
   public object_creator oc;
   
   // Use this for initialization
   void Start () {      
      GameObject o;
      o = oc.createObject ("b1", world, 0f, 0f, 0f, 8f, 8f, 0.05f);
      oc.setTexture(o,"board1");

      GameObject o2 = oc.createObject ("b2", world, 8f, 0f, 0f, 8f, 8f, 0.05f);
      oc.setTexture(o2,"board2");      
   }

   
   // Update is called once per frame
   void Update () {
      
   }
}

This script creates two "map tiles" each 8x8 units in size. It places the first at 0,0 and the second at 8,0 (immediately to the right of the first one). The script downloads the image board1.png and applies it to the first tile, and downloads the png image board2.png and applies it to the second tile.

The end result looks something like this:


When we place an object at 0,0 (in world space) it appears in the first square, from the bottom-left corner of the map. If we change the co-ordinates of the object to 3,4 in world space, it appears four squares in and five squares up from our "board origin" in the bottom-left corner of the map (remember our map starts at zero, so at x3, the object should appear on the fourth square in).


A liberal sprinkling of iTween functions and a simple download-map-data-via-xml and we're on the way to creating a top-down game which can load map layout data (and sprites/images) from a website - online map editing here we come!

Sunday, 9 April 2017

AVR atmega328 PORTC not working AVCC

One of the things I've personally struggled with, switching between Arduino and PIC, is the way the Arduino IDE/language deals with digital pins. I like to use terms like PORTB.5 (the sixth pin on PORTB) rather than the Arduino-specific "pin 13". Of course you can use direct port access with Arduino, but the convention is to address each individual pin using the crazy sequential numbering system.
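For anyone who hasn't tried it, direct port access looks something like this on an ATmega328-based board (a minimal sketch - both approaches toggle the same physical pin):

void setup() {
  DDRB |= (1 << DDB5);      // make PB5 (Arduino "pin 13") an output
}

void loop() {
  PORTB |= (1 << PORTB5);   // PB5 high - same as digitalWrite(13, HIGH)
  delay(500);
  PORTB &= ~(1 << PORTB5);  // PB5 low - same as digitalWrite(13, LOW)
  delay(500);
}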



I've been working with a couple of guys on a custom "Arduino" board - in actual fact, it's little more than an ATMega328P AVR chip on a custom PCB; necessary only because we wanted to use 8 inputs, 8 outputs, SPI and a single, reversible pin for serial communication. At first we wanted to use an Arduino Pro Mini but no matter which way we tried to route things, we always ended up with pins 10-17 (yep, digital 17) as inputs with pull-up resistors enabled.

As most Arduino users know, on most Arduino boards pin 13 has an onboard LED. Which means we can't use it as an input (since the LED and its inline resistor pull the pin low, despite the internal pull-up).

We also wanted to use a full-bridge rectifier to protect our little delicate AVR chips (they really don't like being powered up in reverse and can easily let out the blue magic smoke if you get the power and ground pins the wrong way around!)



So we figured that the best idea would be a custom board with an AVR atmega328, with connectors for our inputs and outputs (routed to the nearest pins on the mcu, not necessarily in the digital pin number sequence) and multiple connectors for power and ground connected not to the AVR chip, but to pins 2 and 3 of the rectifier. The output of the rectifier is then connected to the AVR chip (pin1 to ground, pin 4 to power). This gives us power sockets which can be connected without worrying about the polarity of the power source.

So everything appeared to be working just fine - the chip booted up and sent data over serial, irrespective of the polarity of the power supply. We tested all the inputs and could see that they were all working. But we were surprised to see that some of our outputs simply didn't work; the serial debug log indicated that the inputs were being read correctly, but the outputs simply failed to go high.

We'd moved some pins around, putting our inputs onto the lower-numbered pins, with outputs on pins 10-17 (in case we ever wanted to return to the Arduino pre-made boards and needed to use the i/o pin with an LED connected to it). But it turned out that every one of our output pins numbered above 13 was not working. That's A0 (digital pin 14), A1 (pin 15), A2 (pin 16) and A3 (pin 17).

We've used pins numbered 14-19 as digital i/o in the past (pins 20-21 are analogue-only inputs and can't be used as digital i/o at all), and we've had no trouble making A2 light up an LED, for example. But there was something not right with our isolated AVR chip on our custom board....

It took some desoldering and a while of testing for continuity before we discovered a hairline fracture in the trace connecting Vcc to the AVcc pin. It turns out that you need power connected to the AVcc pin for any of PORTC to work as digital outputs.

And it also turns out that PORTC happens to include the Arduino digital pins 14 (C0) through to 19 (C5). So without power on our AVcc pin, pins 14-19 failed to work as outputs.

A quick bit of tack-soldering and short length of wire and everything worked perfectly! So there you have it - if your digital pins 14-19 fail to work as outputs, double-check your connection between Vcc and AVcc; it's not just some useless "alternative" connection, it does actually serve a purpose!


Wednesday, 5 April 2017

Not all A3114 hall sensors are the same - who knew?

We were playing about with hall sensors again this week. We've used hall sensors a lot in the past, and had a bunch of A3114 sensors left over from previous projects. But there were only a couple left and the massive bag of left-overs was somewhere in a box in the black hole that the bungalow workshop has quickly become.

A few clicks on AliExpress later and we had some more A3114 sensors sent within just five days. We threw the lot together in a little component drawer and got on with making our project.

Hall sensors are often used as limit switches, but don't suffer from the problems that mechanical switches often do in dusty environments - namely there are no moving parts to get gunked up with dust, and no way the switch can get jammed. But when we tried using them, we got some weird results.



Some hall sensors simply didn't work.
Some triggered from about three inches away!
Some worked as we expected, triggering when a neodymium magnet approached to within about 5mm. And some acted less like switches and more like variable/analogue devices, with the output increasing in intensity as the magnet was moved closer.

To get to the bottom of things we created a simple hall sensor tester from a battery, an LED and a socket (into which we plugged our different hall sensors to try them out).


The first sensor in the video demonstrates how we expected the hall sensors to work; introduce a magnet and at a certain distance, the sensor acts like a switch and the LED lights up (in the video it appears to fade up quickly, but that's the camera auto-light-adjustment; in real life it switches almost instantly).

The last sensor in the video - although not immediately obvious in the film - appeared to work a tiny, tiny amount; if you looked right inside the LED, a tiny little dot of light was just about perceptible, when the magnet was right up against the sensor.

The second sensor in the video had us puzzled.
Not because it triggers from a long way away, but because it appears to have an almost-analogue-like behaviour - the intensity of the light increases/decreases as the magnet is moved towards/away from the sensor. The reason this was particularly puzzling is because A3114 sensors are supposed to have an inbuilt hysteresis.



The A3114 is supposed to have a "trigger" and "release" magnetic flux density with a "dead band" which reduces any "chatter" that might occur just at the point where the switch would normally activate (similar to the bounce in a mechanical switch).

Yet the second sensor in the video doesn't display either a trigger or a release threshold - the intensity of the LED changes in relation to the distance from the sensor. Which makes us wonder - what on earth kind of sensor is it?!


On closer inspection, we found that the sensors that worked as we expected them to were labelled 3114/515 and 3114/OH15.

The newer sensors are labelled 3114/402.
Which suggests that not all 3114 hall sensors are the same.

Who knew?