A taste of hardware programming
14 Nov 2011
This post was imported from blogspot.
Have you ever tried hardware programming with Verilog or VHDL? At university I had to program FPGAs in Verilog, and it struck me that hardware programming is a lot like software programming, yet at the same time very different. Instead of consuming plentiful RAM, a hardware program consumes a much scarcer resource: transistors. Physics can cause your program to malfunction, but aside from that, I really enjoyed my one little hardware programming course.

The fundamental difference between hardware and software is that every logic gate inherently operates in parallel, whereas software inherently operates sequentially and requires a special mechanism (threads) to do tasks in parallel. Traditionally, writing multithreaded programs has been quite difficult, due to the danger of race conditions, the danger of deadlocks, forgetting to lock data structures, and so on. So it was a little surprising to find that such problems don't necessarily happen in hardware programming. In fact, the biggest assignment I was given (to build an "alarm system") was relatively fun and easy, for two reasons: (1) the solution was small enough to fit in our FPGAs without much effort, and (2) I didn't have to "update" variables.
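As a minimal illustration of that difference (my own example, not from the course), consider how Verilog describes two circuits: both `assign` statements below define gates that operate continuously and simultaneously, with no notion of one statement running "before" the other.

```verilog
// Two independent circuits: both outputs are computed continuously
// and in parallel; statement order is irrelevant to the hardware.
module parallel_gates(
  input  wire a, b, c,
  output wire x, y
);
  assign x = a & b;  // an AND gate...
  assign y = b | c;  // ...and an OR gate, operating at the same time
endmodule
```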
For instance, when the user activates the alarm system, she has 15 seconds to exit the building before the device is "armed". To make this work in a software program, you might write an "event" method and configure some sort of timer to call that method each second. When the event fires, you would check whether the program is in "countdown mode", and if so, decrement a variable representing the number of seconds left, then display the new value by explicitly calling a method to update the screen.
In a hardware program, it works a bit differently. It's been a few years since I did this in hardware, but I'll give you the gist of it. You don't need a timer "event"; instead, there is a hardware clock running at a known speed, which I'll say is 10 MHz, i.e. 10 million clock cycles per second, or 0.1 microsecond per tick. So I created a counter that takes the hardware clock as input and restarts every 1/60 of a second (because part of the user interface updated 60 times per second), and when it restarts, I programmed a particular wire to be "on" (1 bit) during that clock cycle only (IIRC). I used that signal as input to a second counter that restarts every second and similarly sets a particular wire to "on" during one clock cycle per second (out of 10 million). Finally, I defined a third counter to represent the countdown itself, which decreases each second while the countdown is active. I kept this counter in BCD format (binary-coded decimal, 4 bits per decimal digit) so that no math would be required to convert the digits into a format suitable for display.
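Here's a sketch of that counter chain in Verilog, reconstructed from memory with names of my own invention. It assumes a 10 MHz clock; note that 10,000,000 doesn't divide evenly by 60, so the 60 Hz pulse is slightly approximate.

```verilog
// Sketch (not the original code): a 10 MHz clock divided down to a
// ~60 Hz one-cycle pulse, then to a 1 Hz pulse driving a BCD countdown.
module countdown_ticks(
  input  wire clk,               // 10 MHz hardware clock
  input  wire countdown_active,
  output reg  tick_60hz,         // high for one cycle, ~60 times/sec
  output reg  tick_1hz,          // high for one cycle, once per second
  output reg [3:0] tens, ones    // the countdown, as two BCD digits
);
  reg [17:0] div60;  // counts 10_000_000 / 60 ~= 166_667 cycles
  reg [5:0]  div1;   // counts 60 pulses of tick_60hz = one second

  initial begin      // power-on state (FPGAs support initial values)
    tick_60hz = 0; tick_1hz = 0;
    tens = 4'd1; ones = 4'd5;    // countdown starts at "15"
    div60 = 0; div1 = 0;
  end

  always @(posedge clk) begin
    tick_60hz <= 0;
    if (div60 == 18'd166_666) begin
      div60     <= 0;
      tick_60hz <= 1;            // "on" during this clock cycle only
    end else
      div60 <= div60 + 1;
  end

  always @(posedge clk) begin
    tick_1hz <= 0;
    if (tick_60hz) begin
      if (div1 == 6'd59) begin
        div1     <= 0;
        tick_1hz <= 1;           // one cycle out of 10 million
      end else
        div1 <= div1 + 1;
    end
  end

  // Decrement the BCD countdown once per second while active. Keeping
  // it as decimal digits means no conversion is needed for display.
  always @(posedge clk)
    if (tick_1hz && countdown_active && {tens, ones} != 8'd0) begin
      if (ones == 4'd0) begin
        ones <= 4'd9;
        tens <= tens - 4'd1;
      end else
        ones <= ones - 4'd1;
    end
endmodule
```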
The "screen" was a pair of 7-segment displays, i.e. a display that can show two characters (the numbers 0123456789 and, if you're creative, the letters AbCdEFgHIJLnoPqrStUy.) The "screen" showed only 16 bits of information total (two digits and two dots), so I simply connected 16 of the FPGA's metal pins directly to the 7-segment display. To draw something on the screen, my hardware program merely had to set those pins to "on" or "off" as appropriate.
I didn't have to execute any "commands" to make a number appear on the seven-segment (numeric) display. Instead, I wrote "functions" (not really functions, but blocks of code that represent circuits or sets of circuits) that did the following:
- Examine the state of the device and, if a countdown is in progress, produce the countdown value as output (the output is 5 bits per digit while the input is 4 bits per digit, so I just set the extra bit to 0). If a countdown is not in progress, something else is selected as output, such as part of a message (e.g. "rEAdy" might be scrolling across the screen; displaying anything but numbers was not part of the assignment, but I added the ability to display messages for fun).
- Those two 5-bit numbers become the input to an output mapper, which converts each 5-bit number to a 7-bit character.
- The 7-bit character is passed directly to the output pins, unless it is modified to display an edit cursor (another visual flourish that was not part of the assignment, and not relevant to this blog post either, for that matter).
I never had to call these "functions" explicitly or push data between them. Instead, I simply declared that (see the sketch following this list):
- the thing that does the countdown is wired to the thing that selects a 5-bit output;
- that thing is wired to a pair of things that select 7-bit outputs; and
- the two 7-bit outputs are wired to the thing that chooses the 16-bit output.
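Here is roughly what that declarative wiring could look like in Verilog. This is my reconstruction, not the original assignment code: the names are invented, and the decoder's case table is abbreviated to a few characters.

```verilog
// A 5-bit character code decoded to 7 segment-enable bits. The case
// table is abbreviated; the real one covered digits and letters.
module char_to_segments(
  input  wire [4:0] code,
  output reg  [6:0] segs       // {g,f,e,d,c,b,a}, active high
);
  always @* begin
    case (code)
      5'd0:    segs = 7'b0111111;  // "0"
      5'd1:    segs = 7'b0000110;  // "1"
      5'd2:    segs = 7'b1011011;  // "2"
      // ... remaining digits and letters omitted ...
      default: segs = 7'b0000000;  // blank
    endcase
  end
endmodule

// The wiring: two copies of the decoder, one per digit, running in
// parallel, with their outputs connected straight to the display pins.
module display_wiring(
  input  wire [4:0] left_code, right_code,  // from the output selector
  input  wire       left_dot,  right_dot,
  output wire [15:0] display_pins
);
  wire [6:0] left_segs, right_segs;
  char_to_segments dec_left (.code(left_code),  .segs(left_segs));
  char_to_segments dec_right(.code(right_code), .segs(right_segs));
  assign display_pins = {left_dot, left_segs, right_dot, right_segs};
endmodule
```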
The fact that stuff happens automatically, without you having to remember to issue "commands", makes programming easier, and thus hardware programming is at times easier than software programming... until you run into the physical limitations of your device. My "alarm system" had a two-character display, so I simply made a duplicate copy of the "function" that converts the 5-bit representation to 7 bits; that way, the two copies could run in parallel. This approach was viable because the display had only two digits; if it had 10 digits or more, 10 copies of the function might not fit on the chip.

To save space, I would instead create only one instance of the function, plus a special circuit to "call" it on each digit in sequence (e.g. a different digit every clock cycle), along with registers to save the result of running the function on each digit (sketched below). Moreover, the chip doesn't have enough metal pins to represent 10 digits, so the screen would have to define some communication protocol for the chip to follow. Potentially, it could get messy. But the tasks that students were given were fairly easy and fun.
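Here's a rough sketch of that space-saving alternative, reusing the hypothetical `char_to_segments` decoder from above: a single decoder instance is "called" on a different digit each clock cycle, and registers hold each digit's decoded result.

```verilog
// One shared decoder instead of N parallel copies: digit i is decoded
// on one clock cycle and its result registered, then i advances.
module scanned_display #(parameter N = 10) (
  input  wire            clk,
  input  wire [5*N-1:0]  codes,  // N packed 5-bit character codes
  output reg  [7*N-1:0]  segs    // N packed 7-bit decoded results
);
  reg  [3:0] i = 0;              // current digit (enough for N <= 16)
  wire [6:0] decoded;

  // A single instance of the decoder, shared by all N digits:
  char_to_segments dec(.code(codes[5*i +: 5]), .segs(decoded));

  always @(posedge clk) begin
    segs[7*i +: 7] <= decoded;      // save this digit's result
    i <= (i == N-1) ? 0 : i + 1;    // move to the next digit
  end
endmodule
```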
Ever since I tasted hardware programming (and probably before then, actually), I have wished that software could offer the same kind of easy, automatic updates, so that variables inside the program automatically propagate to the screen, or to wherever they are supposed to go. Data binding provides part of the answer, but the problem is that you don't usually want to display your program's variables directly on the screen; you want to filter, sort, and format them first. And if the user modifies what's on the screen, reverse transformations are needed to figure out how the internal variables should change. Recently, I discovered a .NET library that (if used correctly) provides these wonderful automatic updates. That library is called Update Controls (at CodePlex).
Functional languages like Haskell and F# also make programming easier, generally speaking, and, like hardware programming, they are sufficiently different from popular "imperative" languages that they feel like something else altogether. However, functional languages don't tend to represent graphical user interfaces (GUIs) in a natural way. For GUIs, all hail Update Controls.