Nov 10 2018

Right now, the cluster is running happily with a Redarc BCDC-1225 solar controller, a Meanwell HEP-600C-12 acting as back-up supply, and a small custom-made ATTiny24A-based power controller which manages the Meanwell charger.

The earlier-purchased controller, a Powertech MP-3735, is now relegated to the function of an over-discharge protection relay.  The device is many times the physical size of a VSR, and isn’t a particularly attractive device for that purpose.  I had tried it recently as a solar controller, but it’s fair to say it’s rubbish at the job.  On a good day, it struggles to keep the battery above “rock bottom” and by about 2PM, I’ll have Grafana pestering me about the battery slipping below the 12V minimum voltage threshold.

Actually, I’d dearly love to rip that Powertech controller apart and see what makes it tick (or not, in this case).  It’d be an interesting study in what they did wrong to give such terrible results.

So, if I pull that out, the question is, what will prevent an over-discharge event from taking place?  First, I wish to set some criteria, namely:

  1. it must be able to sustain a continuous load of 30A
  2. it should not induce back-EMF into either the upstream supply or the downstream load when activated or deactivated
  3. it must disconnect before the battery reaches 10.5V (ideally it should cut off somewhere around 11-11.5V)
  4. it must not draw excessive power whilst in operation at the full load

With that in mind, I started looking at options.  One of the first places I looked was, of course, Redarc.  They do have a VSR product, the VS12, which has a small relay in it rated for 10A, so it fails on (1).  I asked on their forums though, and it was suggested that for this task, a contactor such as the SBI12 be used to do the actual load shedding.

Now, deep inside the heart of the SBI12 is a big electromechanical contactor.  Many moons ago, working on an electric harvester platform out at Laidley for Mulgowie Farming Company, I recall we were using these to switch the 48V supply to the traction motors.  The contactors there could switch 400A, and the coils were driven from a 12V 7Ah battery, which in the initial phases was connected using spade lugs.

One day I was a little slow getting the spade lug on, so I was making-breaking-making-breaking contact.  *WHACK*… the contactor told me in no uncertain terms it was not happy with my hesitation and hit me with a nice big back-EMF spike!  I had a tingling arm for about 10 minutes.  Who knows how high that spike was… but it was probably higher than the 20V absolute maximum rating of the MIC29712s used for power regulation.  In fact, there’s a real risk they’d happily let such a rapidly rising spike straight through to the motherboards, frying about $12000 worth of computers in the process!

Hence I’m keen to avoid a high back-EMF.  Supposedly the SBI12 “neutralises” this… not sure how; maybe there’s a flywheel diode or MOV in there (like this), or maybe instead of just removing power in a step function, they ramp the current down over a few seconds so that the back-EMF is reduced.  So this isn’t an issue for the SBI12, but it may be for other electromechanical contactors.

The other concern is the power consumption needed to keep such a beast activated.  There’s an initial spike as the magnetic field ramps up and starts drawing the armature of the contactor closed, then the current can drop once contact has been made.  The figures on the SBI12 are ~600mA initially, then ~160mA when holding… give or take a bit.

I don’t expect this to be turned on frequently… my nodes currently have up-times around 172 days.  So while 600mA (7~8W at 12V nominal) is high, that’ll only be for a second at most.  Most of the time the draw will be the holding current; call it 200mA to be safe, so about 2~3W.

That 2-3W is going to be the same, whether my nodes collectively draw 10mA, 10A or 100A.

It seemed like a lot, but then I thought, what about an SSR?  You can buy a 100A DC SSR like this for a lot less money than the big contactors.  Whack a nice big heat-sink on it, and you’re set.  Well, why the heat-sink?  These things have a fixed voltage drop as well as an on-resistance.  In the case of the Jaycar one, the drop is about 350mV and the on-resistance is about 7mΩ.

Suppose we were running flat chat at our predicted 30A maximum…

  • MOSFET switch voltage drop: 30A × 350mV = 10.5W
  • Ron resistive loss: (30A)² × 7mΩ = 6.3W
  • Total power dissipation: 10.5W + 6.3W = 16.8W OUCH!

16.8W is basically the power of an idle compute node.  The 3W of the SBI12 isn’t looking so bad now!  But can we do better?

The function of a solid-state relay, amongst other things, is to provide electrical isolation between the control and switching sides: the two are usually galvanically isolated.  This is a feature I really don’t need, so I could reduce costs by just using a bare MOSFET.

The earlier issues I had with the body diode won’t be a problem here, as there’s a definite “source” and “load”: no current will flow out of the load back to the source to confuse some sensing circuit on the source side.  This same body diode might be an issue for dual-battery systems, as the auxiliary battery can effectively supply current to a starter motor via this body diode, but in my case, it’s strictly switching a load.

I also don’t have inductive loads on my system, so a P-channel MOSFET is an option.  One candidate for this is the Infineon AUIRFS3004-7P.  The Ron on these is supposedly in the realm of 900µΩ–1.25mΩ, and being a bare MOSFET rather than an SSR, there’s no fixed voltage drop, just the resistive one.  Thus my power dissipation at 30A is predicted to be a little over 1W.
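
For what it’s worth, the arithmetic is easy to sanity-check.  A quick sketch in Python, using the figures quoted above (the contactor line assumes my ~200mA holding guess at 12V nominal):

# Back-of-envelope check of the dissipation figures above.
I = 30.0  # predicted maximum load, in amps

def ssr_dissipation(i, v_drop=0.350, r_on=0.007):
    """The SSR loses power in both the fixed drop and the on-resistance."""
    return i * v_drop + (i ** 2) * r_on

def mosfet_dissipation(i, r_on=1.25e-3):
    """A bare MOSFET has only the resistive loss; worst-case Ron assumed."""
    return (i ** 2) * r_on

print("SSR:       %5.2f W" % ssr_dissipation(I))     # 16.80 W
print("MOSFET:    %5.2f W" % mosfet_dissipation(I))  #  1.12 W
print("Contactor: %5.2f W" % (0.2 * 12.0))           #  2.40 W holding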

There are others too with even smaller Ron values, but they are in teeny tiny 5mm-square surface-mount packages.  The AUIRFS3004-7P looks dead-buggable: just bend up the gate pin so I can solder direct to it, treat the grouped leads as single “pins”, then strap the sucker to a big heatsink (maybe an old PIII heatsink will do the trick).

I can either drive this MOSFET with something of my own creation, or with the aforementioned Redarc VS12.  The VS12 still does contain a (much smaller) electromechanical relay, but at 30mA (~400mW), it’s bugger all.

The question though was what else could be done?  @WIRING_SOLUTIONS suggested some units made by Victron Energy.  These do have a nice feature in that they also offer over-voltage protection, and conveniently, the threshold is 16V, which is the maximum recommended input for the MIC29712s I’m using.  They’re not badly priced, and they’re solid-state.

However, what’s the Ron?  What’s the voltage drop?  Victron don’t know.  They tell me it’s “minimal”, but is that 100nV, 100mV, 1V?  At 30A, a 100mV drop equates to 3W, on par with the SBI12.  A 500mV drop would equate to a whopping 15W!

I had a look at the suppliers for Victron Energy products, and via those, found a few other contenders such as this one by Baintech and the Projecta LVD30.  I haven’t asked about these, but again, like the Victron BatteryProtect, neither of them lists a voltage drop or Ron.

There’s also one from Jaycar, but this is the same place that sold me the Powertech MP-3735, and before that the original Powertech MP-3089, then provided a replacement for it, then replaced the replacement under RMA.  The Jaycar VSR also has practically no specs… yeah, I think I’ll pass!

Whitworths Marine sell one that might be worth looking at, but the cut-out voltage is a little high, and they don’t actually give the holding current (the 330mA “engage” current sounds like it’s electromechanical), so I have no idea how much power this would dissipate either.

The power controller isn’t doing a job dissimilar to a VSR… in fact it could be repurposed as one, although I note its voltage readings seem to drift quite a lot.  I suspect this is due to the choice of 5% tolerance resistors on the voltage sensing circuit and my use of the ~1.1V internal voltage reference.  The resistors will drift a little bit, and the voltage reference can be anywhere from 1.0 to 1.2V.

Would an LM311N with good-quality 1% resistors and a quality voltage reference be “better”?  Who knows?  Maybe I should try an experiment and see if I can get minimal drift out of an LM311N.  It’s either the resistors, the voltage reference, or a combination of the two that’s responsible for the power controller’s drift.
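
Before reaching for different parts, the worst case can be bounded on paper.  A minimal sketch, assuming a hypothetical 100k/10k divider (the power controller’s actual resistor values aren’t quoted here) and the 1.0–1.2V reference spread:

# Worst-case spread of a low-voltage cut-off threshold.
from itertools import product

R_TOP, R_BOT = 100e3, 10e3   # hypothetical divider values, 5% tolerance
V_REF_NOM = 1.1              # nominal internal reference

def cutoff(r_top, r_bot, v_ref):
    """Battery voltage at which the divided-down voltage hits the reference."""
    return v_ref * (r_top + r_bot) / r_bot

extremes = [cutoff(R_TOP * (1 + a), R_BOT * (1 + b), v)
            for a, b, v in product((-0.05, 0.05), (-0.05, 0.05), (1.0, 1.2))]
print("nominal %.2fV, worst-case %.2fV to %.2fV"
      % (cutoff(R_TOP, R_BOT, V_REF_NOM), min(extremes), max(extremes)))
# nominal 12.10V, worst-case 10.05V to 14.46V: the reference tolerance
# alone moves the threshold by far more than the resistors do.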

Perhaps I need to investigate which is causing the problem and see what can be done in the design to reduce it.  If I can get acceptable results, then maybe the VS12 can be dispensed with.  I may be able to do it with another ATTiny24A, or even just a simple LM311N.

Oct 27 2018

So earlier, I had mentioned that it’s really not desirable to have ARQ (automatic repeat request) on a link carrying TCP traffic.  My comment is based on this observation:

http://sites.inka.de/bigred/devel/tcp-tcp.html

In that article, the discussion is about one TCP connection being tunnelled over another TCP connection.  Basically it comes down to the lower layer buffering and re-sending the TCP segments just as the upper layer gives up on hearing a reply and re-sends its own attempt.

Now, end-to-end ACKs have been done on long chains of AX.25 networks before, and the mechanism is generally accepted to be unreliable.  UDP for sure can benefit, but then many protocols that use UDP already do their own handling of lost messages.  CoAP, for instance, does its own ARQ, as does TFTP.

Gerald Wagenknecht, Markus Anwander and Torsten Braun discuss some of the impacts of this on an 802.15.4 network in their paper “Hop-to-Hop Reliability in IP-based Wireless Sensor Networks – a Cross-Layer Approach“.  In it, they talk about a variant of TCP called TSS: TCP Support for Sensor Networks.  This was discussed in depth in a thesis by Adam Dunkels, “Towards TCP/IP for Wireless Sensor Networks“.

This latter document was apparently the inspiration for 6LoWPAN.  Section 4.4.3 discusses the approaches to handling ARQ in TCP.  Section 9.6 goes into further detail on how ARQ might be handled elsewhere in the network.

Thankfully in our case, it’s only the network that’s constrained; the nodes themselves will be no smaller than a Raspberry Pi, which would have held its own against the PC that Adam Dunkels used to write that thesis!

In short, it looks as if just routing IP packets is not going to cut it; we need to actually handle the TCP side of things as well.  As for other protocols like CoAP, I guess the answer is to be patient.  The timeout settings defined in RFC-7252 are tuneable, and it may be desirable to back those off just a little for use over AX.25.
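
For reference, RFC-7252’s default transmission parameters give the schedule below; ACK_TIMEOUT is the obvious knob to stretch on a slow link.  A quick sketch of the arithmetic:

# CoAP confirmable-message retransmission schedule (RFC 7252 defaults).
ACK_TIMEOUT = 2.0        # seconds; this is the value to stretch for 1200 baud
ACK_RANDOM_FACTOR = 1.5
MAX_RETRANSMIT = 4

timeout = ACK_TIMEOUT * ACK_RANDOM_FACTOR   # worst-case initial timeout
waits = []
for _ in range(MAX_RETRANSMIT):
    waits.append(timeout)
    timeout *= 2                            # binary exponential back-off
print(waits)                                # [3.0, 6.0, 12.0, 24.0]

# MAX_TRANSMIT_WAIT: how long before the sender gives up entirely.
print(ACK_TIMEOUT * (2 ** (MAX_RETRANSMIT + 1) - 1) * ACK_RANDOM_FACTOR)  # 93.0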

Oct 20 2018

So, doing some more digging here.  One question people might ask is what kind of applications would I use over this network?

Bear in mind that it’s running at 1200 baud!  If we use HTTP at all, tiny is the word!  No bloated images, and definitely no big heavy JavaScript frameworks like ReactJS, Angular, Dojo or jQuery.  You can forget watching Netflix in 4K over this link.

HTTP really isn’t designed for low-bandwidth links, as Steve Netting demonstrated:

The page itself is bad enough, but even then, it only loads after a minute.  The real slow bit is the 20kB GIF.

So yeah, slow-scan television, the ability to send weather radar images over the air, is something I was thinking of… but not like that!

HTTP uses pretty verbose headers:

GET /qld/forecasts/brisbane.shtml?ref=hdr HTTP/1.1
Host: www.bom.gov.au
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:62.0) Gecko/20100101 Firefox/62.0
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Language: en-AU,en-GB;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate
Referer: http://www.bom.gov.au/products/IDR664.loop.shtml
Cookie: bom_meteye_windspeed_units_knots=yes
Connection: keep-alive
Upgrade-Insecure-Requests: 1
Pragma: no-cache
Cache-Control: no-cache

HTTP/1.1 200 OK
Accept-Ranges: bytes
Content-Encoding: gzip
Content-Type: text/html; charset=UTF-8
Server: Apache
Vary: Accept-Encoding
Content-Length: 6321
Date: Sat, 20 Oct 2018 10:56:12 GMT
Connection: keep-alive

That request is 508 bytes and the response headers are a further 216 bytes.  It’d be inappropriate on 6LoWPAN, where you’d be fragmenting that packet left, right and centre to squeeze it into the 128-byte 802.15.4 frames.

In that video, ICMP echo requests were also demonstrated, and those weren’t bad!  Yes, a little slow, but workable.  So to me, it’s not the packet network that’s the problem; it’s that something big like HTTP is just not appropriate for a 1200-baud radio link.

It might work on 9600 baud packet … maybe.  My Kantronics KPC3 doesn’t do 9600 baud over the air.

CoAP was designed for tight messages.  It is UDP-based, so the TCP connection overhead disappears, and the “options” are in many cases encoded as individual bytes.  There are other UDP-based protocols that would work fine too, as well as older TCP protocols such as Telnet.

A request, and reply in CoAP look something like this:

Hex dump of request:
00000000  40 01 00 01 3b 65 78 61  6d 70 6c 65 2e 63 6f 6d   @...;exa mple.com
00000010  81 63 03 52 46 77 11 3c                            .c.RFw.< 

Hex dump of response:
    00000000  60 45 00 01 c1 3c ff a1  1a 00 01 11 70 a1 01 a3   `E...<.. ....p...
    00000010  04 18 64 02 6b 31 39 32  2e 31 36 38 2e 30 2e 31   ..d.k192 .168.0.1
    00000020  03 64 65 74 68 30                                  .deth0

Or in more human readable form:

Request:
Constrained Application Protocol, Confirmable, GET, MID:1
    01.. .... = Version: 1
    ..00 .... = Type: Confirmable (0)
    .... 0000 = Token Length: 0
    Code: GET (1)
    Message ID: 1
    Opt Name: #1: Uri-Host: example.com
        Opt Desc: Type 3, Critical, Unsafe
        0011 .... = Opt Delta: 3
        .... 1011 = Opt Length: 11
        Uri-Host: example.com
    Opt Name: #2: Uri-Path: c
        Opt Desc: Type 11, Critical, Unsafe
        1000 .... = Opt Delta: 8
        .... 0001 = Opt Length: 1
        Uri-Path: c
    Opt Name: #3: Uri-Path: RFw
        Opt Desc: Type 11, Critical, Unsafe
        0000 .... = Opt Delta: 0
        .... 0011 = Opt Length: 3
        Uri-Path: RFw
    Opt Name: #4: Content-Format: application/cbor
        Opt Desc: Type 12, Elective, Safe
        0001 .... = Opt Delta: 1
        .... 0001 = Opt Length: 1
        Content-type: application/cbor
    [Uri-Path: coap://example.com/c/RFw]

Response:
Constrained Application Protocol, Acknowledgement, 2.05 Content, MID:1
    01.. .... = Version: 1
    ..10 .... = Type: Acknowledgement (2)
    .... 0000 = Token Length: 0
    Code: 2.05 Content (69)
    Message ID: 1
    Opt Name: #1: Content-Format: application/cbor
        Opt Desc: Type 12, Elective, Safe
        1100 .... = Opt Delta: 12
        .... 0001 = Opt Length: 1
        Content-type: application/cbor
    End of options marker: 255
    Payload: Payload Content-Format: application/cbor, Length: 31
        Payload Desc: application/cbor
        [Payload Length: 31]
Concise Binary Object Representation
    Map: (1 entries)
        Unsigned Integer: 70000
            Map: (1 entries)
                ...0 0001 = Unsigned Integer: 1
                    Map: (3 entries)
                        ...0 0100 = Unsigned Integer: 4
                            Unsigned Integer: 100
                        ...0 0010 = Unsigned Integer: 2
                            Text String: 192.168.0.1
                        ...0 0011 = Unsigned Integer: 3
                            Text String: eth0

That there also shows another tool for data packing: CBOR.  CBOR is basically binary JSON.  Just like JSON it is schemaless; it has objects (maps), arrays, strings, booleans, nulls and numbers (though CBOR differentiates between integers of various sizes and floats).  Unlike JSON, it is tight.  The CBOR blob in this response would look like this as JSON (in the most compact representation possible):

{70000:{4:100,2:"192.168.0.1",3:"eth0"}}

The entire exchange is 190 bytes, a little over a third of the size of just the HTTP request alone.  I think that would work just fine over 1200-baud packet.  As a bonus, you can also multicast; try doing that with HTTP.
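
The request dump above is simple enough to reproduce by hand, which is a good way to see where every byte goes.  A sketch (it only handles options whose delta and length fit in 4 bits, which is all this request needs):

# Hand-pack the CoAP GET request shown in the hex dump above.
import struct

def option(delta, value):
    # One CoAP option: 4-bit delta, 4-bit length, then the value itself.
    return bytes([(delta << 4) | len(value)]) + value

msg = struct.pack("!BBH", 0x40, 0x01, 1)  # ver 1, CON, TKL 0; code 0.01 GET; MID 1
msg += option(3, b"example.com")          # option  3: Uri-Host
msg += option(8, b"c")                    # option 11: Uri-Path (delta 11-3=8)
msg += option(0, b"RFw")                  # option 11: Uri-Path (delta 0: repeat)
msg += option(1, bytes([60]))             # option 12: Content-Format; 60 = CBOR
print(msg.hex())  # 400100013b6578616d706c652e636f6d816303524677113c

Those 24 bytes match the dump byte-for-byte.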

So you’d be writing higher-level services that would use this instead of JSON-REST interfaces.  There’s a growing number of libraries that can consume this sort of thing, and IoT is pushing that further.  I think it’s doable.

Now, on the routing front, I’ve been digging up a bit on Net/ROM.  Net/ROM is actually two parts: Level 3 does the routing, and Level 4 does the circuit switching.  It’s the “Level 3” bit we want.

Coming up with a definitive specification of the protocol has been a bit tough (it doesn’t help that there is a company called NetROM), but I did manage to find this document.  In a way, if I could make my software behave like a Net/ROM node, I could piggy-back off that to discover neighbours.  This protocol would then co-exist alongside Net/ROM networks that may be completely oblivious to TCP/IP.

This is preferable to just re-inventing the wheel… yes, I know non-circular wheels are so much fun!  Really, once Net/ROM L3 has figured out where everyone is, IP routing just becomes a matter of correctly addressing the AX.25 frame so the next hop receives the message.

VK4RZB at Mt. Coot-tha is one such node running TheNet.  It’s easy enough to do tests on, as it’s a mere stone’s throw from my home QTH.

There’s a little consideration to make about how to label the AX.25 frame.  Obviously, it’ll be a UI frame, but what PID field should I use?  My instinct suggests I should just label it as “ARPA Internet Protocol”, since it is Internet Protocol traffic, just IPv6 instead of v4.  Not all the codes are taken though; 0xc9 is free, so I could be cheeky and use that instead.  If the idea takes off, we can talk with TAPR then.

Oct 10 2018

This is another brain dump of ideas.

So, part of me wants to consider the idea of using amateur radio as a transmission mechanism for 6LoWPAN.  The idea being that we use Net/ROM, AX.25 or similar schemes as a transport mechanism for delivering shortened IPv6 packets.  Over this, we can use standard TCP/IP programming to write applications.

Protocols designed for low-bandwidth constrained networks are ideal here, so things like CoAP where emphasis is placed on compact representation.  6LoWPAN normally runs over IEEE 802.15.4 which has a payload limit of 128 bytes.  AX.25 has a limit of 256 bytes, so is already doing better.

The thinking is that I “encode” the call-sign into a “hardware” address.  MAC addresses are nominally 48 bits, although the IEEE is trying to phase that out in favour of 64-bit EUIs.  Officially the IEEE looks after this space, so we want to avoid doing things that might clash with their system.

An EUI-48 (MAC) address is 6 bytes long, where the first 3 bytes identify the type of address and the organisation, and the latter 3 bytes identify an individual device.  The least significant two bits of the first byte are flags that decide whether the address is unicast or multicast, and whether it is globally administered (by the IEEE) or locally administered.

To avoid complications, we should probably keep the multicast bit cleared, to indicate that these addresses are unicast addresses.

Some might argue that the ITU assigns prefixes to countries, and these countries have national bodies that hand out callsigns, thus we could consider callsigns “globally administered”.  Truth is, the IEEE has nothing to do with that process, and could quite legitimately assign the EUI-48 prefix 56-4b-34 to some company… in that hypothetical scenario, there go all the addresses that might represent amateur operators stationed in Queensland.  So let’s call these “locally administered”, especially since there are suffixes the user may choose (e.g. “/P”).

That gives us 46 bits to play with.  7-bit ASCII just fits 6 characters plus a 4-bit SSID, which would just cover the callsigns used in AX.25.  We don’t need all 128 characters though, and a scheme based on DEC’s Radix-50 can pack in far more.

We can get 8 arbitrary Radix-50 characters into 43 bits, which leaves 3 bits over for the user to use as they wish.  We’ll probably call it the SSID, but unlike AX.25’s, it will be limited to the range 0-7.  The user can always sacrifice the least significant character in their callsign field for an additional 6 bits, which gives them 9 bits to play with (i.e. “VK4MSL-1″#0 to encode the AX.25 SSID “VK4MSL-10”).

Flip the multicast bit, and we’ve got a group address.

SLAAC derives the IPv6 address from the EUI-48, so the IPv6 address will effectively encode the callsigns of the two communicating stations.  If both are on the same “mesh”, then we can probably borrow ideas from 6LoWPAN for shortening that address.
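
To make the packing concrete, here’s a sketch.  Note the 40-character alphabet is my own hypothetical choice (the scheme only requires at most 40 symbols), so the resulting addresses are illustrative only:

# Pack a callsign + 3-bit SSID into a locally-administered unicast EUI-48.
ALPHABET = " ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789-/@"   # hypothetical, 40 chars

def callsign_to_eui48(callsign, ssid=0):
    cs = callsign.upper().ljust(8)[:8]            # 8 Radix-50-style characters
    value = 0
    for ch in cs:
        value = value * 40 + ALPHABET.index(ch)   # base-40; 40^8 < 2^43
    payload = (value << 3) | (ssid & 0x7)         # 43 + 3 = 46 bits
    # First octet: top 6 payload bits, then U/L=1 (local), I/G=0 (unicast).
    first = (((payload >> 40) & 0x3f) << 2) | 0b10
    return bytes([first]) + (payload & ((1 << 40) - 1)).to_bytes(5, "big")

print(":".join("%02x" % b for b in callsign_to_eui48("VK4MSL")))
# 6a:93:c4:81:48:00 with this alphabet; SLAAC would then derive the IPv6
# interface identifier from this EUI-48.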

Sep 17 2018

Politicians and bureaucrats, aren’t they wonderful?  They create some of the laws that are the cornerstone of our civilisation.  We gain much stability in the world from their work.

Many are often well versed in law, and how the legal systems of the world work.  They believe that their laws are above all others.

So much so, they’ll even try to legislate the ratio of a circle’s circumference to its diameter.  Thankfully back then, others had better common sense.

They legislated for websites to display a banner that people have to click, telling the user that the website uses cookies for XYZ purpose.  Now, I have never set foot in Europe, and I really don’t have any desire to leave Australia for that matter.  I am not a European citizen.  I do not use a VPN for accessing foreign websites: they see my Australian IP address.

In spite of this, every website now insists on pestering me about a law that is not in force here.  You know what?  You can disable cookies.  It is a feature of web browsers.  Even NCSA Mosaic, Netscape Navigator and the first versions of Internet Explorer (which were dead ringers for NCSA’s browser, by the way) had this feature.  I’m talking mid-90s-era browsers… and every descendant thereof.

It’d be far more effective for the browser to ask if XYZ site was allowed to set a cookie, but no, let’s foist this burden onto the website owner.  I don’t doubt people abuse this feature for various nefarious purposes, but a solution this is not!

It gets better though.  To quote the EFF (Today, Europe Lost The Internet. Now, We Fight Back):

Today, in a vote that split almost every major EU party, Members of the European Parliament adopted every terrible proposal in the new Copyright Directive and rejected every good one, setting the stage for mass, automated surveillance and arbitrary censorship of the internet: text messages like tweets and Facebook updates; photos; videos; audio; software code — any and all media that can be copyrighted.

Three proposals passed the European Parliament, each of them catastrophic for free expression, privacy, and the arts:

1. Article 13: the Copyright Filters. All but the smallest platforms will have to defensively adopt copyright filters that examine everything you post and censor anything judged to be a copyright infringement.

Yep, this is basically much like China’s Great Firewall, just outsourced.

It actually has me thinking about whether it is possible to detect if a given HTTP client is from the EU, and respond with HTTP error 451, because doing business in the EU is just too dangerous legally.
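
A sketch of how that detection might look, assuming MaxMind’s GeoLite2 country database via the geoip2 package, with a deliberately abridged member-state list:

# Answer EU clients with HTTP 451, Unavailable For Legal Reasons.
from http.server import BaseHTTPRequestHandler, HTTPServer
import geoip2.database
import geoip2.errors

EU = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}  # abridged!
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        try:
            country = reader.country(self.client_address[0]).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            country = None
        if country in EU:
            self.send_error(451, "Unavailable For Legal Reasons")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"G'day!\n")

HTTPServer(("", 8080), Handler).serve_forever()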

Aug 26 2018

So, a bit over 10 years ago, I made the Hat Lamp.  You can tell how long ago it was, as it calls out Dick Smith part numbers for things like resistors.  (How long ago did they give that up?)

Anyway, the original still works, although my wiring is less than perfect, and I’ve thought about modernising it.  Back in 2007, addressable LEDs didn’t exist; the project got by with nothing more than a 74HC14.  It used one gate as a Pierce oscillator, a second to generate a 180° out-of-phase signal, and a third to perform “automatic” control based on the light that fell on an LDR mounted on the top of the hat.

If I do modernise it, I have access to 3D printing facilities at HSBNE, so destroying a hard hat is no longer a concern: the design I come up with could be made to fit a hard hat without modification, meaning it would retain its safety-standard qualifications.  LED technology has also marched on; in 2007, a 1W LED was considered bright.  The Ay-up headlights I use when cycling are many times brighter than that.  These headlights do come with a headband accessory, which I have, but I find it a bit cumbersome.  They do, however, work great on a hard hat or helmet.

That said, for WICEN activities, they’re often too bright, even on their lowest power setting.

MCUs are also cheaper today than they used to be.  And we have addressable LEDs.  That means I could have the old alternating-red “alien-abduction head first” pattern of the original, or any number of patterns to suit the occasion.

That said, I’m a little concerned about APA Electronic throwing their weight around.  I actually was considering the APA102s, as they use an SPI-style interface which is less timing-sensitive than World Semi’s WS2812s, but really, I hadn’t made a firm choice.  Then Pimoroni got that letter.  I have no idea whether that letter was (1) a hoax (as in, not actually sent by APA Electronic), (2) a matter since settled, or (3) a matter still in progress.

In any case, the patent referenced talks about synchronous interfaces.  One common gripe with the WS2812s is that the interface relies on strict timing, which is harder to do with higher-level MCUs and CPUs.  Arduinos can work them fine, but as it’s a somewhat custom serial link, you’ve got to be able to bit-bang it via GPIOs, and not all systems are good at that.  Using SPI avoids that problem at the cost of an extra wire.  I wondered if there was another way.

This is what I came up with as a concept.  UARTs idle “high” when not transmitting, so the TX line can serve as a pull-up when the master is not sending anything.  Some MCUs can also re-map their pins (e.g. NXP LPC81x, TI CC2538, Nordic nRF52840), and others natively support half-duplex UART Rx/Tx on a single pin (e.g. Microchip ATTiny202).

That allows us to have a shared “bus” with 3 wires between each module: VDD, Data and VSS… the same as the WS2812s.  Unlike the WS2812s, this bus would be built on the UART standard, thus less sensitive to timing jitter.

The problem then is, how do you address each LED?  The WS2812 and APA102s solve this by making the whole bus function as one big shift register.  This makes the electronics simple, but has the cost that you can only communicate one way, from MCU to LED controller.  Thus, you have to maintain a framebuffer, and if you want to just change the colour of one LED, you’ve got to shift the entire framebuffer out.

Why can’t the LEDs have more brains?  How hard is it to DIY a LED controller?

The above arrangement uses the concept of “upstream” and “downstream” ports.  A bilateral switch is used to disconnect the “downstream” port under the control of the LED firmware.  If each slave on power-up waited for some initialisation signal from the master, all could then disconnect their downstream ports, which means the next command would be received only by the first LED module.

On telling it its assigned address, it could connect its downstream neighbour and you’d then be able to talk to those two LEDs, one of which would be at some unknown address, and the other, assigned.

This would repeat until you got to the end of the chain.  Given that the downstream LED can “hear” its upstream neighbour when it is connected, it’s not hard for the downstream LED to assume its address is one after its upstream neighbour.

Disconnection would be achieved via some sort of tri-state bi-directional buffer such as a bilateral switch.  I’ve used 4066s in the diagram, but in reality, I’d be using a single-unit version like a SN74LVC1G66.

It’s also common to consider these as matrices.  It’d be really neat if, say, you had an array of 320×240 pixels, and the LEDs used 4-byte addressing where the lowest 9 bits are the X co-ordinate and the upper bits the Y co-ordinate.  Thus the addressing would count from 0x00000000 to 0x0000013f, then skip to 0x00000200.  That’s a simple arithmetic operation.  By connecting the next neighbour then announcing the address, this could trigger the neighbour to do the same automatically.  The master would then hear each and every pixel announce its address as it comes online.  When the messages stop, initialisation is complete.

A major problem with asynchronous communications is figuring out the baud rate.  Luckily, LIN has solved that problem.  No, I won’t actually use the LIN protocols; I’ll just use its sync frame, support for which is built into many MCU UART modules.  LIN uses a header which includes a BREAK followed by the byte 0x55, which helps the slave figure out the correct baud rate in use.  If I use that same sequence, I get autobauding for free.

So putting this together, how would this work?  Let’s assume everything has just been reset.  Each protocol frame would be bounded by a header and a trailer, the header being based on the LIN standard.  Not sure what the trailer will look like at this point, maybe a CRC.

  1. On power-on, the slaves all link their downstream ports.  (Thus if a controller crashes, you lose just that one pixel.  It also allows all slaves to receive the initial configuration commands.)
  2. The master then tells the slaves to commence a roll-call.  The instruction would be made up of:
    1. The “commence roll-call” op-code (1 byte)
    2. The length in bytes of the addresses to be used (1 byte; call its value L)
    3. The number of bits in the address representing a single row (1 byte; call this D)
    4. The number of pixels per row (L bytes, call this M)
  3. The slaves immediately disconnect their downstream ports then respond back with
    1. The “OK” op-code
  4. Since the head of the line would have disconnected every other slave, the master only hears one “OK” response.  The master performs the following computation:
    • Address = (2^(8L)) – 1
  5. The master starts the roll-call off by sending the following on the bus.
    1. The “Address announcement” op-code (1 byte)
    2. The address it calculated (L bytes)
  6. Since just the first slave is connected, it hears this.  It connects its downstream neighbour, then with the received address, it performs the following algorithm:
    • Address = UpstreamAddress + 1
    • If (Address & ((2^D)-1)) >= M:
      • Address = ((Address >> D) + 1) << D
  7. With its new address, and the immediate neighbour connected, it sends
    1. The “Address announcement” op-code (1 byte)
    2. The address it calculated (L bytes)

Ad infinitum, until the last in the chain announces its address.

The master of course hears all, including its own traffic.  As an example, if we considered a 320×200 pixel panel with 32-bit addressing; thus L=4, D=9 and M=320, it would hear this:

  • HEADER OP_ROLL_CALL L=4 D=9 M=320 TRAILER: Begin roll-call
  • HEADER RES_OK TRAILER: Slaves ready for roll-call
  • HEADER OP_ADDR_ANN ADDR=0xffffffff TRAILER: Master “my address is 0xffffffff”
  • HEADER OP_ADDR_ANN ADDR=0x00000000 TRAILER: First pixel “my address is 0x00000000”
  • HEADER OP_ADDR_ANN ADDR=0x00000001 TRAILER: Second pixel “my address is 0x00000001”
  • etc
  • HEADER OP_ADDR_ANN ADDR=0x0000013f TRAILER: 320th pixel “my address is 0x0000013f”
  • HEADER OP_ADDR_ANN ADDR=0x00000200 TRAILER: 321st pixel “my address is 0x00000200”
  • etc

At the end, everybody knows their address, including the master (which is derived from the address length; its address is “all ones”), and because each slave connected its neighbour, everyone can communicate together.
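
The address arithmetic in steps 4 and 6 is easy to sanity-check against the roll-call above; a sketch using the worked example’s parameters (L=4, D=9, M=320):

# Sanity-check of the roll-call address arithmetic.
L, D, M = 4, 9, 320
MASK = (1 << (8 * L)) - 1       # 32-bit addresses

def next_address(upstream):
    # Step 6: one more than the upstream neighbour, skipping to the
    # next row once the column count reaches M.
    addr = (upstream + 1) & MASK  # master is all-ones, so the first slave gets 0
    if (addr & ((1 << D) - 1)) >= M:
        addr = ((addr >> D) + 1) << D
    return addr

addr = MASK                     # step 4: the master announces 0xffffffff
for _ in range(321):
    addr = next_address(addr)
print(hex(addr))                # 0x200: the 321st pixel, as in the list above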

Operation codes could be implemented that allow a pixel to be set to a given colour, or to report its present colour.  Since they all know how to interpret the address to form co-ordinates, it’s possible for the master to send a command that says “fill rectangle (X1,Y1)-(X2,Y2) with colour C”, all pixels hear this simultaneously and the action is performed.

Or better yet, “pixels in area (X1,Y1)-(X2,Y2), copy the colour from the neighbour to your right”, which would allow for scrolling text displays.  The pixels would know immediately who to ask, and could have an “agreed upon” order in which to perform operations.  Thus (X1,Y1) would know to ask (X1+1,Y1) for its colour, copy that to its own output, then tell (X1,Y1+1) to perform its step.  (X1,Y2) would know that once it copied its colour from (X1+1,Y2), it needs to poke (X1+1,Y1).  Finally (X2,Y2) would know to tell the master that the operation is complete.

Blitting can also be done.  You know the operation, you know how to obtain the input data, the rest can be done independent of the master MCU.

The microcontrollers don’t need a lot of brains to do this, nor does the master for that matter; it’s distributed brains that get the job done.  The part I’m thinking of for this is the Microchip ATTiny202, which can be bought for under 60c a piece and features a hardware UART and up to 4 PWM channels.

For sure, add in the bilateral switch, some passives, a PCB and a RGB LED and you’ve blown more money than the competition, but in this case, you’ve got a fully programmable LED controller with open-source firmware, that’s not patent encumbered.

It might be a little while before this MCU is available; Mouser reckon they’ll have them late October, which is fine, I can wait.  Until then, there’s plenty of time to research the problem.

Jul 23 2018

Lately, I’ve been doing a lot of development work on Tridium Niagara kit.  The Tridium platform is fundamentally built on Sun^WOracle’s Java environment, and is very popular in the building management industry.  There are an estimated 600,000+ JACE devices (building management controllers) deployed worldwide, so I can fully understand why my workplace is chasing them.

That means coming to grips with their environment, and getting it to talk to ours.  Officially, VRT is a Debian/Ubuntu shop.  They used to dabble with Red Hat years ago, back when VRT and Red Hat were next-door neighbours (in Gardner Close, Milton), but VRT switched to Ubuntu around 2008 after a brief flirt with Gentoo.  Thus, most of our tooling assumes a Debian-based system.

Docker CE on Debian and Ubuntu is a snap.  However, Tridium, it would seem, are Red Hat fans, and only support their development environment on Microsoft Windows (yes, shudder) or Red Hat Enterprise Linux.  Thus, we have a RHEL 7.3 VM that we pass around when we’re doing development.  I figured that since we’re trying to link Niagara to WideSky, it would be nice to be able to deploy WideSky on RHEL.

WideSky uses Docker as the basis for its deployment, so this sounded simple enough.  Install Docker and docker-compose, throw a bog-standard deployment in there, docker-compose up -d, off we go.

Not so fast.

While there’s Docker EE for RHEL, budget is tight and we really don’t need the support, as this isn’t a “production” instance as such.  If the VM gets sick, we just roll it back to a known-good version and continue from there.  It doesn’t make sense to spend money on Docker EE.  There is a CentOS version of Docker CE, and even unofficial instructions on how to shoehorn this into RHEL.  I dutifully followed these, but then hit a road-block with container-selinux: the repository no longer has that version.

Rather than looking for what version they have now, or playing Russian Roulette hunting for a random RPM from some mirror site (been there, done that many moons ago, before I knew better)… a better plan was to grab the sources and sic rpmbuild onto them so we get a RHEL-native binary.

Building container-selinux on RHEL

  1. Begin by installing dependencies:
    # yum install -y selinux-policy selinux-policy-devel rpm-build rpm-devel git
  2. Download the sources for the RPM:
    $ git clone https://git.centos.org/r/rpms/container-selinux.git
    $ cd container-selinux
    $ git checkout c7-alt
    $ cd SPECS
  3. Have a look at the .spec file to see where it expects to source the sources from, up the top of the file I downloaded, I saw:
    %global git0 https://github.com/projectatomic/%{name}
    %global commit0 dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  4. Fetch the sources, then check out that commit:
    $ git clone https://github.com/projectatomic/container-selinux
    $ cd container-selinux
    $ git checkout dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  5. Rename the check-out directory as container-selinux-${GIT_COMMIT_ID}
    $ cd ..
    $ mv container-selinux container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  6. Package it up into a tarball, excluding the .git directory and plop that file in ~/rpmbuild/SOURCES
    $ tar --exclude container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca/.git \
    -czvf ~/rpmbuild/SOURCES/container-selinux-dfb449b.tar.gz \
    container-selinux-dfb449b771ca4977bb7d5fb6cd7be3cfc14d6fca
  7. Build!
    $ rpmbuild -ba container-selinux.spec

All going to plan, you should have a shiny new RPM file in ~/rpmbuild/RPMS.  Install that, then you can proceed with installing the CentOS version of Docker CE.  If you’re doing this for a production environment and absolutely must use Docker CE, I’d advise taking the source RPMs for Docker CE and building those on RHEL rather than using raw CentOS binaries, but each to your own.

# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 3.10.0-693.11.1.el7.x86_64
Operating System: Red Hat Enterprise Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.702GiB
Name: localhost.localdomain
ID: YVHJ:UXQV:TBAS:E5MH:B4GL:VT2H:A2BW:MQMF:3AGA:FBBX:MINO:24Z6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Jul 22 2018

So, on the bike, I use a portable GPS to keep track of my speed and to track the mileage done on the bike, so I know when to next put it in for a service.  Originally I just relied on the trip counter in the GPS, but then found that this could develop quite an error if left to tick over for a few months.

Thus, I wrote a simple CGI application in Perl and SQLite3 that would track the odometer readings.  Plain, simple, and it’s worked quite well, but remembering to punch in the current odometer reading is a chore, and my stats are only as granular as my submissions: if I want to see what distance I did on a particular day, I either have to have had the foresight to store readings at the start and end of that day, or I’m stuffed.

I also keep the GPX tracklogs. While the Garmin 650 is not great at handling lots of tracklogs (and for some moronic reason, they name the files “Day DD-MMM-YY HH.MM.SS.gpx”, not something sensible like “YYYY-MM-DDTHH-MM-SS.gpx”), it’s good enough that I can periodically siphon off the track logs for storage on my laptop. I then have a record of where I’ve been.

Theoretically, the tracklogs also record the distance travelled; I could make a service that just consumes the GPX files and tallies up the distances that way.  Maybe even visualise, heat-map style, where I go most.  (No prizes for guessing “work”… but where else?)
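
As a sketch of that tally idea: the gpxpy library will happily do the per-file sums (the ~/tracklogs path is just an assumption about where the files end up):

# Tally distances straight from a directory of GPX files (pip install gpxpy).
import glob
import os.path

import gpxpy

total_km = 0.0
for path in sorted(glob.glob(os.path.expanduser("~/tracklogs/*.gpx"))):
    with open(path) as f:
        gpx = gpxpy.parse(f)
    km = gpx.length_3d() / 1000.0       # metres to km, elevation included
    total_km += km
    print("%-40s %8.2f km" % (os.path.basename(path), km))
print("Total: %.2f km" % total_km)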

The existing system uses SQLite, and specifically its views as a poor man’s stored procedures.  It’s hacky, inefficient, and sooner or later I’ll have performance problems.  PostGIS is an extension to PostgreSQL which supports a large number of spatial operations, including finding the length of a series of points, which is exactly the problem I’m trying to solve right now.  The catch is: how do you import the data?

Enter GDAL

GDAL is a library of geographic functions for answering these kinds of questions.  It ships with a utility, ogr2ogr, which can take geographic information in a variety of formats and convert it to a variety of output formats.  Crucially, this tool supports consuming GPX files and writing to a PostGIS database.

Loading one file is easy enough:

$ ogr2ogr -oo GPX_ELE_AS_25D=YES \
  -dim 3 \
  -gt 65536 \
  -lco GEOM_TYPE=geography \
  -preserve_fid \
  -f PostgreSQL \
  "PG:dbname=yourdb" yourfile.gpx \
  tracks track_points

The arguments here were found by trial-and-error.  Specifically, -oo GPX_ELE_AS_25D=YES and -dim 3 tell ogr2ogr to preserve the elevation in the point information (as well as keeping a copy of it in the ele column). -lco GEOM_TYPE=geography tells ogr2ogr to use the geography data type in PostGIS.

Look in the database, and you’ll see two tables, tracks and track_points.  Sadly, you don’t get to choose the names of these (not easily anyway; there is -nln, but it will then create one table with the given name, put the tracks in it, then blow it away and replace it with a table of the same name containing the points), and there are no foreign keys between the two.

The fun starts when you try to import a second GPX file. Run that command again, and because of -preserve_fid, you’ll get a primary key clash. Take that away, and the track_fid column in track_points becomes meaningless.

If you drop -preserve_fid, then track_fid gets set to 0 for all points.  Useless.

Importing many GPX files

Out of the box, this just wasn’t going to fly, so we needed to do things a little different.  Firstly, I duplicated the schema that GDAL creates, creating my own tables which will ultimately store the data.  I then used a wrapper shell script that calls psql before and after ogr2ogr so I can re-map the primary keys to maintain relationships.

Schema SQL

CREATE SEQUENCE public.gpx_points_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE SEQUENCE public.gpx_tracks_ogc_fid_seq
    INCREMENT 1
    START 1
    MINVALUE 1
    MAXVALUE 2147483647
    CACHE 1;

CREATE TABLE public.gpx_tracks
(
    ogc_fid integer NOT NULL,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    "number" integer,
    type character varying COLLATE pg_catalog."default",
    gpxx_trackextension character varying COLLATE pg_catalog."default",
    the_geog geography(MultiLineStringZ,4326),
    CONSTRAINT gpx_tracks_pkey PRIMARY KEY (ogc_fid)
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

CREATE TABLE public.gpx_points
(
    ogc_fid integer NOT NULL,
    track_fid integer,
    track_seg_id integer,
    track_seg_point_id integer,
    ele double precision,
    "time" timestamp with time zone,
    magvar double precision,
    geoidheight double precision,
    name character varying COLLATE pg_catalog."default",
    cmt character varying COLLATE pg_catalog."default",
    "desc" character varying COLLATE pg_catalog."default",
    src character varying COLLATE pg_catalog."default",
    link1_href character varying COLLATE pg_catalog."default",
    link1_text character varying COLLATE pg_catalog."default",
    link1_type character varying COLLATE pg_catalog."default",
    link2_href character varying COLLATE pg_catalog."default",
    link2_text character varying COLLATE pg_catalog."default",
    link2_type character varying COLLATE pg_catalog."default",
    sym character varying COLLATE pg_catalog."default",
    type character varying COLLATE pg_catalog."default",
    fix character varying COLLATE pg_catalog."default",
    sat integer,
    hdop double precision,
    vdop double precision,
    pdop double precision,
    ageofdgpsdata double precision,
    dgpsid integer,
    the_geog geography(PointZ,4326),
    CONSTRAINT gpx_points_pkey PRIMARY KEY (ogc_fid),
    CONSTRAINT gpx_points_track_fid_fkey FOREIGN KEY (track_fid)
        REFERENCES public.gpx_tracks (ogc_fid) MATCH SIMPLE
        ON UPDATE RESTRICT
        ON DELETE RESTRICT
)
WITH (
    OIDS = FALSE
)
TABLESPACE pg_default;

The wrapper script

#!/bin/sh

DB=tracklog

for f in "$@"; do
        # Clear out any staging tables left over from a previous run.
        psql "${DB}" <<EOF
DROP TABLE IF EXISTS tracks;
DROP TABLE IF EXISTS track_points;
EOF
        # Import this GPX file into the staging tables.
        ogr2ogr -oo GPX_ELE_AS_25D=YES \
                -dim 3 \
                -gt 65536 \
                -lco SPATIAL_INDEX=FALSE \
                -lco GEOM_TYPE=geography \
                -overwrite \
                -preserve_fid \
                -f PostgreSQL \
                "PG:dbname=${DB}" "$f" \
                tracks track_points

        # Re-map FIDs then insert into the real tables.
        psql "${DB}" <<EOF
        CREATE TEMPORARY TABLE track_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_tracks_ogc_fid_seq') AS ogc_fid
        FROM    tracks;

        CREATE TEMPORARY TABLE point_fids AS
        SELECT  ogc_fid AS orig_fid,
                nextval('gpx_points_ogc_fid_seq') AS ogc_fid
        FROM    track_points;

        INSERT INTO gpx_tracks
        SELECT  track_fids.ogc_fid AS ogc_fid,
                tracks.name as name,
                tracks.cmt as cmt,
                tracks."desc" as "desc",
                tracks.src as src,
                tracks.link1_href as link1_href,
                tracks.link1_text as link1_text,
                tracks.link1_type as link1_type,
                tracks.link2_href as link2_href,
                tracks.link2_text as link2_text,
                tracks.link2_type as link2_type,
                tracks."number" as "number",
                tracks.type as type,
                tracks.gpxx_trackextension as gpxx_trackextension,
                tracks.the_geog as the_geog
        FROM    track_fids, tracks
        WHERE   track_fids.orig_fid=tracks.ogc_fid;

        INSERT INTO gpx_points
        SELECT  point_fids.ogc_fid AS ogc_fid,
                track_fids.ogc_fid AS track_fid,
                track_points.track_seg_id AS track_seg_id,
                track_points.track_seg_point_id AS track_seg_point_id,
                track_points.ele AS ele,
                track_points."time" AS "time",
                track_points.magvar AS magvar,
                track_points.geoidheight AS geoidheight,
                track_points.name AS name,
                track_points.cmt AS cmt,
                track_points."desc" AS "desc",
                track_points.src AS src,
                track_points.link1_href AS link1_href,
                track_points.link1_text AS link1_text,
                track_points.link1_type AS link1_type,
                track_points.link2_href AS link2_href,
                track_points.link2_text AS link2_text,
                track_points.link2_type AS link2_type,
                track_points.sym AS sym,
                track_points.type AS type,
                track_points.fix AS fix,
                track_points.sat AS sat,
                track_points.hdop AS hdop,
                track_points.vdop AS vdop,
                track_points.pdop AS pdop,
                track_points.ageofdgpsdata AS ageofdgpsdata,
                track_points.dgpsid AS dgpsid,
                track_points.the_geog AS the_geog
        FROM    track_points, track_fids, point_fids
        WHERE   point_fids.orig_fid=track_points.ogc_fid
        AND     track_fids.orig_fid=track_points.track_fid;

        DROP TABLE tracks;
        DROP TABLE track_points;
        DROP TABLE track_fids;
        DROP TABLE point_fids;
EOF
done

Getting the length of a track

Having imported all the data, we can do something like this:

SELECT ogc_fid, name,
  ST_Length(the_geog, false)/1000 as dist_in_km
FROM gpx_tracks order by ogc_fid desc limit 10;

and get this:

 ogc_fid |          name          |     dist_in_km
---------+------------------------+--------------------
    1754 | Day 20-JUL-18 18:09:02 |   9.83689686312541
    1753 | Day 15-JUL-18 09:36:16 |   5.75919119415676
    1752 | Day 14-JUL-18 17:12:24 |  0.071734341651265
    1751 | Day 14-JUL-18 17:12:23 | 0.0729574875289383
    1750 | Day 13-JUL-18 08:13:32 |   9.88420745610283
    1749 | Day 06-JUL-18 09:00:32 |   9.81221316219109
    1748 | Day 30-JUN-18 01:11:26 |   9.77607205972035
    1747 | Day 23-JUN-18 05:02:04 |   19.6368592034475
    1746 | Day 22-JUN-18 18:03:37 |   9.91964760346248
    1745 | Day 12-JUN-18 21:22:26 | 0.0884092391531763

Visualisation with QGIS

Turns out, this is straightforward…

  1. In your workspace, there’s a tree with the different layer types you can add, including PostGIS… right-click on this and select New Connection… fill in the details for your PostgreSQL database.
  2. Below that is XYZ Tiles…, right click again, select New Connection for OpenStreetMap, and use the URL https://a.tile.openstreetmap.org/{z}/{x}/{y}.png (also, see their policy).
  3. Drag the OpenStreetMap connection to your layers
  4. Expand the PostGIS connection you just made, and look for the gpx_tracks table, drag this on top of your OpenStreetMap layer.

Below is everywhere I’ve been with the GPS tracklog running.  Much of what you see is the big loop a few of us did in 2012, including my trip to Ballarat for the 2012 LCA.

If I zoom in on Brisbane, unsurprisingly, some areas show up very clearly as being common haunts for me:

A bit of SQL voodoo, and I come up with this:

In orange is the territory covered on the Boulder (minus what was covered before I got the GPS), in blue the territory covered on the Talon 29 ER 0, and in red, on my current commuter (Toughroad SLR2).

Jul 16 2018

So, the local media here (I can’t comment for other parts of the world) have been quite busy reporting on the fate of the Wild Boars soccer team and their coach, stuck in a flooded cave in Thailand.  With the great work of many, the group is now free of the cave, and getting the medical attention they need.

Pats on the back all around.  It could very well have been a dozen funerals that needed to be organised instead of servings of various meals.

Overshadowing this, though, has been the rather childish spat between Vern Unsworth and Elon Musk over the miniature submarine that was proposed as a vehicle for transporting the children through the cave system.

Now, I’ll admit right up front that what I know is what I’ve heard from the media here.  In amongst the reports, it was commented that the gaps through which people had to squeeze were as small as 38cm in places.

That does not leave you much room.  That’s bloody confined in the extreme.  A submarine that could fit a child and squeeze through such a gap?  It’d be positively claustrophobic!

Now, Mr Unsworth did label this as a PR stunt.  Maybe it was … maybe the design was just naïve.  I think the goal was a noble one, and Elon Musk’s team did a great job in giving it a go, even if they did overlook a few critical details.

However, I think I’ll take Mr Unsworth’s advice over Mr Musk’s regarding whether the device was practical, as he was actually there.  If the device got stuck, the results could have been fatal.  The rescue team was already in a dangerous situation and had already lost one of its members; they really weren’t in a position to experiment.  I think responding with “stick it where it hurts” was being overly harsh, but otherwise I think the criticism was entirely valid.

You do not, however, call someone a “pedo” without very good grounds for doing so.  That is defamatory.  And what exactly is “sus” about living in Thailand?  Tesla’s been suffering some quite bad press lately; I really do not think this juvenile behaviour helps anyone.

One is free to believe that ego is not a dirty word, but that does not mean one’s humility should be locked under the stairs!


Update 2018-07-17: Hmm, I was saying…? Tesla sheds almost $US2b after Elon Musk’s ‘pedo’ attack on British diver.

Jun 14 2018

So, last Sunday we did a trip up the Brisbane Valley to do a reccie for the Yarraman to Wulkuraka bike ride that Brisbane WICEN will be assisting with at the end of next month.

The area is known to be quite patchy where phone reception is concerned, with Linville shown as highly unreliable… Telstra recommends external antennas to get any sort of service there.  So it seemed a good place to take the Kite and try it out in a weak-signal area.

3G coverage in Linville, with external antenna.

4G coverage in Linville, with external antenna.

4GX coverage in Linville, with external antenna.

Sadly, I didn’t get as much time as I would have liked to perform these tests, and it would have been great to compare against a few other locations… but we got there in the afternoon, and there were clouds gathering, so we had to get to Moore.

In any case, Telstra seems to have pulled their socks up since those maps were updated… as I found I was getting reasonable coverage on the T83.  The Kite was in the car at the time, I didn’t want it getting damaged if I came off the bike or if the heavens opened up.

I did manage to take some screenshots on the three phones on the way up, all on the same network (Telstra), using their internal antennas (and the small whip in the case of the Kite).

This is not that scientific, and is a bit crude, since I couldn’t take the screenshots at exactly the same moment.  Plus, we were travelling at 100km/h for much of the run.  There was one point where we stopped for breakfast at Fernvale, but I can’t recall exactly what time that was, or whether I got a screenshot from all three phones at the time.

The T84 is the only phone out of the three that can do the 4GX 700MHz band.

Each row below originally embedded a screenshot of the phone’s signal reading; only the time, device and notes survive here.

Time                | Device | Notes
2018-06-10T06:08:16 | T83    | Leaving Brisbane
2018-06-10T06:09:24 | Kite   |
2018-06-10T06:09:33 | T83    |
2018-06-10T06:26:17 | T83    |
2018-06-10T06:26:25 | Kite   |
2018-06-10T07:30:27 | T84    | A rare moment where the T84 beats the others.  My guess is this is a 4GX (700MHz) cell.
2018-06-10T07:30:34 | Kite   |
2018-06-10T07:30:39 | T83    |
2018-06-10T07:41:48 | Kite   |
2018-06-10T07:41:54 | T84    | HSPA coverage… one of the few times we see the T84 drop back to 3G.
2018-06-10T07:42:01 | T83    |
2018-06-10T07:51:34 | T83    | Patchy coverage at times en route to Moore.
2018-06-10T07:51:45 | Kite   |
2018-06-10T08:24:57 | Kite   | For grins, trying out Optus coverage on the Kite at Moore.  There’s a tower at Benarkin, not sure if there’s one closer to Moore.
2018-06-10T08:25:39 | Kite   |
2018-06-10T08:54:28 | T84    |
2018-06-10T08:54:35 | Kite   | En route to Benarkin, we lose contact with Telstra on all three devices.
2018-06-10T08:54:39 | T83    |
2018-06-10T09:35:14 | Kite   | In Benarkin.
2018-06-10T09:35:22 | T83    |
2018-06-10T10:25:27 | Kite   |
2018-06-10T10:25:48 | T83    |

So what does the above show?  Well, for starters, it is apparent that the T83 gets left in the dust by both of the other devices.  This is interesting, as my T83 was definitely the more reliable on our last trip into the Snowy Mountains, regularly getting a signal in places where the T84 failed.

Two spots I’d love to take the Kite would be Dumboy Creek (4km outside Delungra on the Gwydir Highway) and Sawpit Creek (just outside Jindabyne), but both are a bit far for a day trip!  It’s unlikely I’ll be venturing that far south again this year.

On this trip up the Brisbane Valley though, I observed that when the signal got weak, the Kite was more willing to drop back to 3G, whereas the two ZTE phones hung onto that little scrap of 4G.  Yes, 4G might give clearer call quality and faster speeds in ideal conditions, but these conditions are not ideal; we’re in fringe coverage.

The 4G standards use much denser forms of modulation (QPSK, 16-QAM or 64-QAM) than 3G (QPSK only), trading signal-to-noise performance for spectral efficiency, and thus lean more heavily on forward error correction to achieve communications in adverse conditions.  When a symbol is corrupted, more data is lost with these standards.  3G might be slower, but sometimes slow and steady wins the race; fast and flaky is a recipe for frustration.

A more scientific experiment, where we are stationary and can let each device “settle” before taking a reading, would be worthwhile.  Without a doubt, the Kite runs rings around the T83.  The T84 is less clear-cut: the T84 and the Kite both run the same chipset, the Qualcomm MSM8916; the T83 runs the older MSM8930.

By rights, the T84 and Kite should perform nearly identically, with the Kite having the advantage of a high-gain whip antenna instead of a more conventional patch antenna.  The only edge the T84 has is the 700MHz band, which isn’t that heavily deployed here in Australia right now.

The T83 and T84 can take an external antenna, but the socket is designed for cradle use and isn’t as rugged or durable as the SMA connector used on the Kite.  It’s soldered to the PCB, and when a cable is plugged in, it disconnects the internal antenna.

Thus damage to this connector can render these phones useless.  The SMA connector on the Kite however is a pigtail to an IPX socket inside … a readily available off-the-shelf (mail-order) part.  People may not like the whip sticking out though.

The Kite does ship with a patch antenna, which is about 75% efficient, so maybe 0dBi at best.  However, I think making the case another 10mm longer and incorporating the whip into the top of the phone, so the antenna can tuck away when not needed, is a better plan.  It would not be hard to make the case accommodate it so it’s invisible and can fold out, or be replaced with a coax connection to an external antenna.

If there’s time, I’ll try to get some more conclusive tests done, but there’s no guarantees on that.