A little while back I purchased Xeltek’s SuperPro 610P Universal programmer.
It has the odd quirk, but overall it’s done the job. There is one thing, however, that has always irritated me about this product: this damn thing:
Every time you start their application or change device, you are prompted with this absolutely f–king useless dialogue and have to dismiss it every time. Worse still, it has no OK or Close button. Even more annoying, there is no option to disable the displaying of it in the first place.
Hell, even if there was any useful information on it, that doesn’t mean I want to see it every single time I use the SuperPro!!!
I contacted Xeltek’s customer support about this; they had me go to the trouble of sending them my invoice and serial number to prove that I had in fact paid them a sum of money, then promptly did absolutely nothing about it, other than tell me that it could not be disabled.
Despite how simple it would be to even change the software to provide an option to disable it, repeated requests to do so were ignored.
Righty, time to do something about this. Thirty minutes behind IDA later, we’re onto it. Quickly I can see it is written with the very same tech I cut my own teeth on: Microsoft Foundation Classes (MFC).
Given this, it’s pretty likely that we’ll see a call to _AfxPostInitDialog() at some point during the displaying of a dialog.
Let’s put a breakpoint in there, and bingo! Hop back up the stack a little, and there I find the offending instruction:
The highlighted instruction is in code written by Xeltek, and calls a function which displays that dialogue both when the application starts and when the device type is changed, but not when the “Dev. Info” button is pressed (in the unlikely event I actually want to see that bloody useless dialogue).
So all that needs to be done is remove it.
In the current version at the time of writing (dated 07/21/2016), that instruction (opcode 0xE8) and its 4-byte operand are physically located at offset 0x3373F in SP6100.exe. Replace all five bytes with NOPs (0x90), and we’re good.
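For reference, the patch itself is trivial. Below is a minimal sketch of the byte-level change (the helper name is mine; as noted above, the offset only applies to the 07/21/2016 build):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Overwrite a 5-byte CALL instruction (0xE8 + rel32 operand) with NOPs.
 * The offset 0x3373F is specific to the 07/21/2016 build of SP6100.exe;
 * any other build will need the offset located again in IDA. */
static void patch_call_to_nops(uint8_t *image, size_t offset)
{
    memset(image + offset, 0x90, 5); /* 0x90 = x86 NOP */
}
```

Apply that to a copy of SP6100.exe loaded into memory (or just seek and write the five bytes with a hex editor) and the dialogue call is gone.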
Now that dialogue is only displayed when the “Dev. Info” button is clicked, which is all I ever wanted to begin with.
Feel free to contact me if you want the patched EXE!
A number of years ago I came across an eSATA cable system known as eSATApd (5V/12V). DeLock was the first vendor I am aware of to sell these products. The key feature of this system is that it carries +12V with no mandated current limit. This makes it possible to power 3.5″ external hard disks from a PC without needing that pesky power brick.
Neither DeLock nor any other vendor produces enclosures that make use of this system, but that’s fine. They do sell the eSATApd connector, and I’ve been modifying enclosures (in some cases, their own) for years to accept eSATApd power input.
Recently I upped the ante by modifying a 2 bay RAID enclosure (using RAID 0) to accept eSATApd so I could power the entire enclosure from the PC. As you can typically get 12V/3A across these cables this should not have been a problem.
Except now I needed 6Gb/s SATA in order to get the benefits from the increased performance of the RAID 0 array. Suddenly, I’ve got a bit of a problem: These cables do not work at 6Gb/s.
This wasn’t entirely surprising to me. The specification for SATA is pretty clear about cables: The correct cable is a distinctive 100Ω impedance, flat twinaxial cable, whereas the DeLock/LINDY cable is a fairly thin and flexible round cable.
Another point that SATA-IO is clear about is that there is no such thing as a 6Gb/s SATA cable. Cables that were properly designed for the original 1.5Gb/s interface should work just fine at 6Gb/s.
Notwithstanding this, I’m already suspicious about the construction of these cables. Let’s take a look at this one:
As the DeLock cables I purchased are now pretty much useless to me, let’s cut one open and see what’s in there:
Well, that clearly isn’t a very SATA-looking cable. What appears to be in here is a couple of foil-shielded, PVC-coated pairs of the same kind of construction that would be used in an HDMI cable.
Allow me to get out my Paint-fu to draw a little diagram of the two styles of cable:
That’s a pretty significant difference in design.
But despair not(?) – DeLock now seem to be selling a newer version of this cable, of which I’ve got a couple. It’s a lot bulkier, with two fairly rigid cores bonded together. Perhaps this newer cable works at 6Gb/s? Why else would they change the design? The old design, being thinner and several times more flexible, was a lot nicer to use.
Nope. This cable also doesn’t work at 6Gb/s. The system in most cases can’t detect the drive, and when it does detect it, file transfers frequently fail.
So now I’ve got another useless eSATA cable. Let’s cut this one open and see what’s going on:
From what I can see, the core on the right is a “Powered USB” cable; this is typically used in conjunction with a specially designed connector for Retail/POS terminals, which have a higher power requirement. This cable carries the +12V, +5V and USB 2.0, and is the correct type of cable for the USB half of this application.
The cable on the left is the one of interest, as it carries the SATA signals. It appears to be of exactly the same construction as the previous edition of the cable – two foil-shielded, PVC-coated pairs.
Whatever the reason these cables don’t work at 6Gb/s, they both have the same problem.
Success at last
After hours of frustration, I found a very interesting looking cable on the U.S. Amazon site. Sold by “Micro SATA Cables”, it’s the first I’ve ever seen which uses proper SATA cable, bonded to a power cable. This is what I was looking for. Fortunately Amazon U.S. ships internationally; a week later I got a couple to try out.
They work! Reliably sustaining the ~350MB/s my 2x6TB RAID 0 enclosure is capable of, and clearly surpassing the ~225MB/s limit of 3.0Gb/s SATA.
I don’t need to cut this cable open to know that it’s correctly designed. Aside from it actually working, the data wire is clearly labelled “Serial ATA”, and it also physically looks like SATA twinaxial cable.
Recently, while staying with the folks in New Zealand, I read that (their) consumer-focused ISP 2Degrees (formerly Snap Internet) is actually offering IPv6 connectivity to customers, no strings attached!
Although not news, this is a pretty significant development for the New Zealand Internet Service Provider market, with almost every other provider very much head-in-the-sand on the matter.
Being a nation with a small population and in possession of a fairly reasonable stock of IPv4 addresses, it’s not surprising the country’s service providers have been procrastinating.
But anyway, the important question: Does it actually work?
A Cisco 877 I left here a number of years ago ought to be up to the task.
Router(config-if)#do show ipv6 dhcp interface
BVI1 is in server mode
Using pool: default
Preference value: 0
Hint from client: ignored
Dialer0 is in client mode
Prefix State is OPEN
Renew will be sent in 10:44:15
Address State is IDLE
List of known servers:
Reachable via address: FE80::200:F:FC00:0
IA PD: IA ID 0x000B0001, T1 43200, T2 69120
preferred lifetime 86400, valid lifetime 86400
expires at Jul 02 2013 10:33 AM (81855 seconds)
Information refresh time: 0
Prefix name: snap-provided-prefix
Prefix Rapid-Commit: disabled
Address Rapid-Commit: disabled
If no prefix appears, it may be necessary to bounce (shutdown/no shutdown) the Dialer0 interface.
So now we’ve got a prefix, but we can’t do anything with it yet. Let’s add some more stuff, in particular the default route for IPv6:
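The commands themselves aren’t reproduced above, but the gist is along these lines (a sketch based on this setup – the prefix name matches the DHCP output shown earlier, and the default route points out Dialer0):

```
Router(config)#ipv6 unicast-routing
Router(config)#ipv6 route ::/0 Dialer0
Router(config)#interface BVI1
Router(config-if)#ipv6 address snap-provided-prefix ::1000:0:0:0:1/64
```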
The last one is a bit of an odd command. The expression “::1000:0:0:0:1/64” sets the last 80 bits of the interface’s address, with the first 48 bits provided by the ISP. If you wanted to allocate another subnet in your network, you could change the “1000” to “1001”, for example.
The subnet is /64 as always because this configuration will end up using EUI-64 for address assignment.
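As a quick aside on what EUI-64 actually does: the client builds its 64-bit interface ID from its MAC address by splitting the MAC in half, inserting FF:FE in the middle, and flipping the universal/local bit. A minimal sketch (my own helper, nothing from IOS):

```c
#include <stdint.h>

/* Derive a modified EUI-64 interface identifier from a 48-bit MAC:
 * insert FF:FE between the OUI and NIC halves, then invert the
 * universal/local bit (bit 1 of the first octet). */
static void mac_to_eui64(const uint8_t mac[6], uint8_t eui[8])
{
    eui[0] = mac[0] ^ 0x02; /* flip the U/L bit */
    eui[1] = mac[1];
    eui[2] = mac[2];
    eui[3] = 0xFF;          /* fixed filler */
    eui[4] = 0xFE;
    eui[5] = mac[3];
    eui[6] = mac[4];
    eui[7] = mac[5];
}
```

So a host with MAC 00:0C:29:0C:47:D5 ends up with interface ID 020C:29FF:FE0C:47D5, appended to whatever /64 we advertise.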
It should pretty much stick straight away:
Router(config)#do show ipv6 int br
We’re almost online now, just one more thing: DNS.
I prefer to use stateless DHCPv6 for the configuration of IPv6 DNS servers (a fat lot of good for Android devices, which don’t support it), but with RDNSS support almost non-existent across mainstream platforms, we’ll have to live with it.
Here we’ll create a DHCPv6 pool just for handing out Snap’s two IPv6 DNS servers:
Router(config)#ipv6 dhcp pool default
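The server addresses themselves aren’t reproduced here, so documentation-range (2001:DB8::/32) placeholders stand in for Snap’s two servers:

```
Router(config-dhcpv6)#dns-server 2001:DB8::53
Router(config-dhcpv6)#dns-server 2001:DB8::54
```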
And attach it to the BVI1 interface:
Router(config-if)#ipv6 nd other-config-flag
Router(config-if)#ipv6 dhcp server default
Address configuration is done by ICMP in this configuration, so we’ve got to set the other-config-flag to let clients know to get the DNS servers via DHCP.
At this stage, anything connected to the network should now be online with IPv6. Windows 7+ clients do not need any additional configuration; the same should be true for most Linux distributions.
Running the “ipconfig /all” command on a Windows 7 machine confirms it’s all working nicely:
Here we can see a full IPv6 address on this client which is:
Snap’s prefix (2406:e001) plus our customer prefix (censored), plus the prefix of the local subnet I configured earlier (0x1000), and finally this machine’s EUI-64 – all together making a rather long string of digits.
Now the ultimate test: Ask Mr Google that question we’ve all asked at some point:
And there it is. Pretty impressive to be seeing that from New Zealand!
Hang on, we’re not done yet
I shouldn’t have to explain that there’s no such thing as private IP addresses in IPv6. Everything is public.
So we should put some firewall rules in place to keep those script kiddies out of the home network. I’ve implemented this using reflexive ACLs:
ipv6 access-list outbound
permit tcp any any reflect tcptraffic-out-ipv6 timeout 30
permit icmp any any reflect icmptraffic-out-ipv6 timeout 30
permit udp any any reflect udptraffic-out-ipv6 timeout 30
ipv6 access-list inbound
permit icmp any FE80::/64
permit udp any FE80::/64 eq 546
I’ve left ICMP open on the link-local range in case it’s needed by the ISP for any reason, and I’ve left UDP port 546 open because that’s what the prefix delegation process uses.
Now apply that to the Dialer0 interface:
Router(config-if)#ipv6 traffic-filter inbound in
Router(config-if)#ipv6 traffic-filter outbound out
The above gives us back more or less the level of security we took for granted with NAT IPv4 address sharing.
Getting it working on Android devices
Because Google still have their head up their arses when it comes to DHCPv6 support, and Cisco didn’t implement RDNSS in IOS until 15.4 (the last version for the Cisco 877 was 15.1), the easiest option to make this work is to configure IPv4 DNS servers (handed out by DHCPv4) which will return AAAA records in DNS responses.
Many ISPs’ DNS servers (including Snap’s) don’t, so you’ll have to find some others.
I’ve been using these displays for 15 years now, but every time, that same old problem comes up: They’re just a bit annoying to interface with.
Despite being difficult to interface with, the HD44780 and compatible clones remain the de facto standard for character LCDs, with no improved replacement in sight.
There are a lot of ‘Backpack’ boards on the market these days which offer an I2C, SPI or even RS232 interface. Although these boards have made interfacing simpler in electrical terms, the software side of things is typically made worse.
The reason they’re awkward is pretty simple: the HD44780 is a “Motorola bus” peripheral, notable for having the “E” and “R/W” signals. They’re only found on 6800 or 68000 derivative microcontrollers.
Strictly speaking, it is only these Motorola processors that can properly interface with an HD44780, for everything else, some kind of fudge is needed.
I’ve personally been endlessly churning out routines to drive these signals under software control, because despite having used scores of different brands of microcontrollers over the years, not one of them has had the magic Motorola bus these displays require.
Recently I found this page which details how to bodge an HD44780 onto an “Intel bus” microcontroller (an AVR in his example).
Let’s have a look at the issue in the simplest possible terms. There are three signals that differ between Intel and Motorola bus, all of which are used by the HD44780.
On Motorola bus, the intention to either read or write to the peripheral is indicated at the same time as the address lines are setup, then, a single signal ‘E’ triggers either the read or write.
On Intel bus, there is no R/W signal; instead, the ‘E’ signal is replaced by two separate read and write signals.
On both a Motorola and an Intel system, the RS signal can be connected to the least significant address line (A0), but when it comes to the read and write, we have a slight problem. By the time we know whether or not the CPU is going to read or write, it’s too late, and no, we can’t just deliver the R/W signal at the same time as ‘E’, because that’s a violation of the timing requirements.
We’re left with the unsolvable problem of not having a replacement for R/W. So that’s it. It’s impossible to directly translate Intel bus to Motorola bus.
Because the HD44780 only has one address line, there’s a simple bodge which offers an acceptable solution to this problem:
Here, we’ve used an extra address line to drive R/W, and RD and WR are NAND’d into E.
This means the HD44780’s two registers end up with four addresses, two read, and two write. It’s not a direct conversion, but at a pinch, I’ll take it.
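The resulting logic is simple enough to sanity-check with a little truth-table model. This is just my own sketch of the bodge – which address lines map to RS and R/W is arbitrary, and I’ve assumed A0 and A1 respectively:

```c
#include <stdint.h>
#include <stdbool.h>

/* Model of the Intel-to-HD44780 glue: RS from A0, R/W from an extra
 * address line (A1 here), and E is the NAND of the active-low #RD and
 * #WR strobes, so E asserts whenever either strobe is active. */
typedef struct {
    bool rs; /* register select       */
    bool rw; /* 1 = read, 0 = write   */
    bool e;  /* enable strobe         */
} hd44780_pins;

static hd44780_pins glue(uint8_t addr, bool n_rd, bool n_wr)
{
    hd44780_pins p;
    p.rs = addr & 0x01;        /* A0 -> RS            */
    p.rw = (addr >> 1) & 0x01; /* A1 -> R/W           */
    p.e  = !(n_rd && n_wr);    /* NAND of the strobes */
    return p;
}
```

The idle case (neither strobe asserted) leaves E low, and asserting either #RD or #WR raises it – exactly what the NAND buys us. Write addresses must keep A1 low, read addresses A1 high, which is where the four addresses come from.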
Why go to all this effort?
For the most part, yes, this is extra hassle to connect a character LCD, but let’s look at the difference this makes to the software.
This is an example of the minimum code which writes a character to the display using the most common ‘bit bang’ mode. This was taken from my 8OD project, which originally was demonstrated using an HD44780 with this method.
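The original listing isn’t reproduced here, but a representative bit-bang write looks something like the following. The port variables and stubbed delay are mine, standing in for real GPIO registers, purely so the shape of the thing is visible:

```c
#include <stdint.h>

/* Stand-ins for real GPIO registers and a busy-wait delay. */
static uint8_t port_data, port_rs, port_rw, port_e;
static unsigned delay_total_us;
static void delay_us(unsigned us) { delay_total_us += us; } /* stub */

/* Representative HD44780 'bit bang' write of one data byte. */
static void lcd_data_bitbang(uint8_t data)
{
    port_rs = 1;      /* select the data register          */
    port_rw = 0;      /* write cycle                       */
    port_data = data; /* present the byte on D0..D7        */
    port_e = 1;       /* raise E...                        */
    delay_us(1);      /* ...hold for the minimum E width   */
    port_e = 0;       /* falling edge latches the data     */
    delay_us(40);     /* blind wait in place of polling Busy */
}
```

Every one of those steps, including the blind 40µs wait at the end, is burned on every single byte.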
That’s an awful lot of fudge just to write a single byte.
Let’s have a look at the code needed to do the same thing when the HD44780 is memory mapped – attached to the processor bus.
void lcd_data(uint8_t data) {
    while (inp(LCD_CMD) & CMD_BUSY); /* spin on the Busy flag */
    outp(LCD_DATA, data);
}
Blimey. After attaching the LCD properly, all of that is reduced to 2 lines of code, and depending on the platform and implementation, orders of magnitude better performing. We no longer need any software delays, not even for initialisation. Bus timings are now all native, and we can poll the ‘Busy’ bit on the HD44780 for everything else.
8OD Specific Solution
The board I’m currently doing this work on is 8OD, based on the Intel 8086. Fortunately I’ve got a honking big CPLD on this board, allowing me to experiment with glue logic.
As an added bonus, the 8086 has a signal which directly replaces R/W: DT/R. DT/R is intended for driving bus transceivers but happens to provide the missing timings needed to generate a perfect R/W signal.
Pretty quickly, I whipped up a VHDL module which ties together all of this logic, into a perfect Intel to Motorola bus converter.
After 15 years of using HD44780’s, this is the first time I’ve ever seen one memory mapped.
One thing I never got around to when I originally published the details of this project was any kind of disclosure of what’s in that CPLD. The thing is, it’s pretty much all VHDL, and not very interesting to look at.
Recently I’ve re-worked it such that it now has a fairly tidy top level schematic. Sure, it still doesn’t reveal every intimate detail of how to build 8OD from primitive logic gates (that would require hundreds, if not thousands of individual gates), but it does give at least some idea of what is going on inside there.
Recently I started looking into the next phase of development I wanted to do on 8OD, specifically, to make a bus available on the 36-pin header and start attaching peripherals to it.
As I looked back through the design, something jumped out at me…
U9 is a not-entirely-necessary but nice-to-have bit of logic which performs bus steering, to make the addressing of 8-bit peripherals easier. It is effectively a bus transceiver, although not quite performing the same role as an actual bus transceiver.
The signals made available on the 8086 for the control of bus transceivers are N.C. Well, well…
Diving back into the CPLD’s VHDL code, I can see that I’ve derived the control signals for U9 from the 8086’s bus control signals, not the dedicated transceiver control signals. This of course is less than ideal, but is it actually a problem? Let’s get those timing diagrams out and have a look.
First stop, the 8086 read timings. Straight away, it doesn’t look good. Because the #OE’s of the transceiver are asserted for the period of the #CS signals of the two peripherals behind it, the de-assertion of #RD will reverse the direction of the transceiver while it is still driving the bus.
Therefore the above diagram shows that this inappropriate use of #RD causes a potential bus clash between the peripherals and the transceiver (U9), whereas DT/#R (not used by my design) is well clear of any such eventualities. How the heck did I miss that?
But as previously stated, this only highlights a potential clash. Because the peripheral is assumed to be driving the bus during this period, we have to examine it to determine if there is an actual clash. Let’s take a look at the SC16C554 datasheet:
That’s not good, it’s now very likely that there is a clash. In terms of the UART (U10), this is clearly represented and quantifiable by the symbol ‘t12h’.
Which is 15 nanoseconds per bus cycle.
The only remaining hope lies within the propagation times of U9 (SN74LVC16T245). I’ve got a pretty good feeling there isn’t going to be any good news there either.
And there it is. U9 can change direction significantly faster than U10 can release the bus. There’s officially a major problem with my design.
I’m a little miffed about this. I really thought the 8OD.1 board was free of mistakes, certainly, it didn’t show any problems during my testing. In some respects, what I’ve just found is the worst kind of problem, as it is likely to be missed during design verification, and will lead to premature failure, after the product has been shipped.
It’s just as well I didn’t mass produce these things.
So what next?
The fix is to use the 8086’s dedicated transceiver control signals (DT/#R and #DEN) – and a couple of pull-up resistors. This time I checked the need for these carefully, as I was keen not to get my backside kicked by yet another quirky detail, specifically that control signals tend to float during certain periods, i.e. Hold Acknowledge and Reset.
And there they are on the original SDK-86 schematic, circa 1978.
I’ve now got DT/#R connected to the DIR signals on U9, and #DEN to the CPLD to control the #OE signals for U9. This is fairly faithful to the original guidelines, and will make these boards reliable in the long term.
So now I’ve got to revise the board. I guess this isn’t such a bad thing, as there’s already a list of things I want to improve with the current design.
Moral of the story
8OD’s 8-bit bus spur was a design annoyance, which I didn’t put as much thought into as I should have. As is always the case when designing complex electronics: If you haven’t thought of absolutely everything, there will always be a major problem hiding in the overlooked details.
With the ROVA USB-TOOLS programmer expensive and sold by few vendors, in 2013 I added support for Parallel port programming, but with PCs equipped with Parallel ports increasingly rare, this hasn’t been enough.
Recently I was contacted by a knowledgeable, well-equipped reader by the name of Rajko Stojadinovic, who has added a frequently requested feature to ROVATool: even more programming hardware options.
FTDI Based boards
This is a new option, which utilises inexpensive FTDI based boards for programming Realtek devices.
(Or pretty much anything else with a Cypress EZUSB-FX2 on it.) This option isn’t entirely new, but it details how it’s possible to convert one of these inexpensive dongles into a ROVA USB-TOOLS programmer.
There aren’t any! Other than that, I’ve split ROVATool out into its own download. As most users of this suite use the tool for RTD2660 platforms, there is little point in bundling the ROVAEdit tool, which is not applicable, with it.
After about an hour of sending thousands of web requests, bingo, the mystery character appeared, and the logic analyser breakpoint I set had triggered. I was pleased to see a clean and clear write of 0xFF to 0x20481 – the exact value, to the exact location, I suspected was being trashed.
After scrolling through reams of bus transactions and correlating them with a disassembly (which I created using Watcom’s ‘wdis’ tool), the problem was becoming apparent.
Although there was only one problem that caused the original fault, I spotted another while I was at it:
1 – I’d forgotten to push the AX register to the stack in the ISR. A bit of a bummer for any interrupted code that was using it.
2 – The second problem had me feeling like I’d failed computer science 101. I’m using the W5100’s interrupt so the application doesn’t have to waste time polling the SPI for stuff to do. Generally speaking, the interrupt handler adhered to good practice – doing no real work, just flagging stuff for the main routine to do.
The oops? I was still reading from the SPI in the ISR, and the main routine code was too, so one invocation of w5100_read() could be interrupted by another copy of w5100_read(), stuffing the whole thing up, because the SPI controller and the W5100 are a shared resource and I wasn’t disabling interrupts for the execution of the non-interrupt-context version.
In the end I moved the read of the W5100’s flags out of the ISR and into the main loop, instead just flagging the interrupt pin in the ISR, which is safe.
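In outline, the corrected structure looks like this (the names mirror my code, but the register read is stubbed for illustration):

```c
#include <stdint.h>
#include <stdbool.h>

/* The fix: the ISR only latches the fact that the interrupt pin fired;
 * all SPI traffic to the W5100 happens in the main loop, so the shared
 * SPI controller can never be re-entered from interrupt context. */
static volatile bool w5100_irq_pending;

static void w5100_isr(void)        /* interrupt context: no SPI access */
{
    w5100_irq_pending = true;
}

static uint8_t w5100_read_ir(void) /* stub for the real SPI register read */
{
    return 0x01;                   /* pretend a socket event is flagged */
}

static bool w5100_poll(void)       /* main-loop context */
{
    if (!w5100_irq_pending)
        return false;
    w5100_irq_pending = false;
    uint8_t flags = w5100_read_ir(); /* safe: only ever read here */
    (void)flags;                     /* ...dispatch work on the flags */
    return true;
}
```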
Did I really need a logic analyser for this? Not really, but heck, once I had a capture, it helped me find both problems in minutes.