by Floris Wouterlood – November 12, 2020
With stop-motion animation (a series of still image frames displayed in rapid succession) we can create the illusion of a moving object. We use here the term ‘scene’. A scene can for instance be manufactured from the archetypal galloping horse photographs taken in 1876 by Eadweard Muybridge*. Scenes can be brought to life in a zoetrope** or, in modern form, in animated GIFs and, of course, in animated movies. Here we present four scenes, each consisting of a number of scene frames: a Tyrannosaurus rex walking, running, howling and roaring. Each scene runs on an Arduino. The very limited program memory of an Arduino, however, restricts the number of scenes to one per sketch. With a Wemos D1 mini, which has an ESP8266 chip as its engine, sufficient memory is available to chain scenes, thus producing, with a few tricks, a lively T-rex rumbling around.
Figure 1. The T-rex project: Frames captured from a T-rex movie are used to
create scenes for the Arduino. Note the enormous data reduction. The final scene frames (insets) have dimensions of 104 pixels wide and 49 pixels high, at 1-bit color depth.
The common Arduino (Uno, Nano) is equipped with a tiny 8-bit processor (ATmega328) that addresses 1 kB of EEPROM, 2 kB of SRAM (‘dynamic memory’) and 32,768 bytes of so-called ‘program memory’. Of this memory 30,720 bytes are user-addressable and available to store a complete animation engine plus the scene image frames that we want to display. As the popular OLED type of display for Arduinos typically features dimensions of 128*64 pixels, a simple calculation shows that one full-screen image frame under pixel-on/pixel-off conditions (1-bit color depth) requires 128*64*1 = 8,192 bits, or the equivalent of 1,024 bytes: 1 kB. At 1 kB per frame the 30,720 bytes of available space would host 30 frames. Of course memory must also be reserved for the animation instructions and for display control. In a previous project I managed, by using frames with smaller dimensions, to squeeze 26 scene frames into a sketch***.
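The arithmetic above can be condensed into two small helper functions. This is an illustrative desktop sketch, not code from the project; the function names are my own:

```c
#include <assert.h>

/* Bytes needed for one frame of w*h pixels at 1-bit color depth
   (8 pixels per byte). */
static int frame_bytes(int w, int h) {
    return w * h / 8;
}

/* Frames of w*h pixels that fit in the 30,720 bytes of
   user-addressable program memory of an Uno/Nano, ignoring the
   space taken by the animation engine itself. */
static int frames_fitting(int w, int h) {
    return 30720 / frame_bytes(w, h);
}
```

For a full 128*64 OLED screen, frame_bytes(128, 64) gives 1,024 bytes and frames_fitting(128, 64) gives 30, matching the calculation above.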
In contrast to the Arduino, the compatible Wemos D1 mini has an ESP8266 architecture with 4 MB of flash memory and roughly 50 kB of free SRAM. This ‘wealth’ of memory space compared with the Arduino invites experimentation with scenes. For compatibility reasons every individual scene here is kept within the Arduino memory limits, so that a single scene, that is, the animation instructions plus all of the scene’s picture frames, can run on an Arduino, while concatenated scenes can be run on a Wemos D1 mini. Examples of both will be provided.
Figure 2. Procedure to prepare a scene. A, Movie frame captured from a movie featuring a T-rex. B, Closed contour drawn as overlay. C, Contour only. D, All contours in a scene are aligned to get fluent transitions. E, Export of every frame as monochrome BMP image. F, Bitmap in E converted to c-file. The c-file is stripped of headers and copy-pasted into the sketch.
About Tyrannosaurus rex
Sixty-six million years ago this imperial carnivorous dinosaur dominated the scene in an area of the earth that is today shared by Colorado, Wyoming and Montana, United States of America. T-rex was a big creature with impressive jaws and equally impressive teeth. In several natural history museums across the world fossilized bony remains have been assembled into huge, awe-inspiring skeletons. Imagine yourself walking around 66 million years ago in North America and stumbling upon a hungry T-rex. Run for your life! Cinema movie makers, prominently those involved in the Jurassic Park series, have gone to great lengths to recreate these animals on their computers. Sample 3D animated dinosaurs of all sorts that show off the artist’s talents can be found on YouTube. One of these movies was used as the basis for the current scenes. One extracted picture frame of the movie used here is pictured in figure 2A. This frame further serves to explain the processing steps necessary to transform the scene into a series of Arduino instructions.
The challenge in the current project was to achieve stop-motion animation featuring T-rex on the Arduino Uno and the Wemos D1 mini and to see how well the scenes perform with these processors and with several displays. The ‘standard’ display is the popular 128*64 SSD1306 OLED. These displays are monochrome by design: pixels are either on or off. More interesting are TFT displays because they introduce color. My favorite output device in this category is the 130*130 pixel SSD1283A TFT display. The alternatives are 320*240 and 320*480 pixel TFTs.
Reduction, reduction, reduction
Any stop-motion animation consists of a sequence of individual frames. Frame dimensions and the number of frames shown contribute overwhelmingly to file size. Remember the 32 kB of program memory in an Arduino and compare that with the tens of megabytes of MP4 format ‘dino’ movies available on YouTube. Next, assume you have a 320*480 TFT display. One monochrome frame in this format requires 320*480*1 / 8 = 19,200 bytes, or 19.2 kB, of memory space. Clearly the road to successful scenes in the Arduino world demands radical reduction of the number of frames per second, the frame size and the color depth. Make things small, simple and monochrome and they will work.
Procedure in brief
The principal steps are the following: download movie – extract frames – select frames – segment – scale down – export frames as monochrome bitmaps – convert these bitmaps into c-arrays – import in Arduino sketch – run animation.
Downloading YouTube movies to a local storage medium is possible with a variety of plugins or extensions to existing browsers. Be aware of the risk that some of these extensions may have adverse features such as sneakily installing search bars, adding advertisements or installing malware right away. Be careful, and immediately uninstall and reinstall your browser if suspect activity is detected. Having a dispensable virtual machine available for this purpose may be handy. Usually movies are in MP4 format, but there is a jungle of fancy formats awaiting you in the video universe.
Several programs (e.g., EZGif, VLC, ImageJ) offer functionality to extract frames from movies. In addition there exist online extraction services. One must experiment here. In Linux the program ‘ffmpeg’ offers all the tools (in a terminal console window, because it is a command line utility) to extract frames from MP4 movies fast and without any interference by commercial programs. The downloaded movie ‘dino.mp4’ was converted by ffmpeg into a series of 1,381 frames.
Typical ffmpeg command:
ffmpeg -i dino.mp4 dino_%4d.png
Selection of scenes
Once a movie has been converted into a long series of movie frames a selection has to be made to get those frames that matter. Several conditions apply; the most important is a fluent transition from the end phase of a movement back into its beginning.
A recommended way to test a scene for the required fluent transition from the end phase of a movement into the beginning is to make an animated GIF and to judge the movement presented in that GIF. For this purpose I use the freely available, open source program ‘Fiji’. Fiji is a Java-based scientific image analysis platform built on ImageJ, which was originally developed by Wayne Rasband at the U.S. National Institutes of Health in Bethesda, MD, USA. Fiji is available for Windows, macOS and Linux.
Four scenes were defined: ‘walking’, ‘running’, ‘howling’ and ‘roaring’. ‘Walking’ consists of a series of 13 frames, ‘running’ features 9 frames, ‘howling’ is covered in 10 frames, while ‘roaring’ is the most complex scene with 23 frames.
Dimensions of the scene frames
An OLED screen is 128 pixels wide and 64 pixels high. A single full-screen frame consumes 1 kB of the Arduino’s user-available memory. Thus, ‘roaring’ with its 23 frames would need 23 kB of memory for the entire series of scene frames. Add some overhead for the animation engine and display control, and the Arduino compiler wakes up: it starts issuing the warning that the sketch might be unstable because it nearly fills the available program memory.
Nibbling off pixels at the edges of scene frames results in tremendous file size savings. As the width of an OLED frame is expressed in units of 8 pixels (8 pixels fit in one byte), the entire display width takes 128/8 = 16 bytes worth of instructions. Nibbling four pixels away on the left and four pixels on the right of a display frame results in a scene frame with a width described in 15 bytes. Only one byte saved here, but multiplied by the height of the picture (64 lines) this brings us to a total of 64 bytes saved per scene frame. Over the 23 frames of the complete scene such a saving frees 1,472 bytes, more than the equivalent of one extra frame!
During the preparation of the scenes it became apparent that all T-rex movements could be contained within a rectangle of 104*49 pixels. Coding for such a scene frame is slightly odd: 13 bytes horizontally (1 byte codes 8 pixels) by 49 lines vertically, so one scene frame requires 13*49 = 637 bytes of code. The scene ‘roaring’ (23 frames) therefore occupies 14,651 bytes of the available memory in an Arduino, while the full-screen size scene would have required 23,552 bytes.
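A small helper makes the byte layout of such a 637-byte frame concrete. This is an illustrative sketch, assuming the usual horizontal 1-bit packing where the most significant bit of a byte is the leftmost pixel; the names are my own:

```c
#include <assert.h>

#define FRAME_W_BYTES 13                         /* 104 pixels / 8 */
#define FRAME_H       49                         /* 49 lines       */
#define FRAME_BYTES   (FRAME_W_BYTES * FRAME_H)  /* 637 bytes      */

/* Return 1 if pixel (x, y) is set in a 104*49 pixel, 1-bit frame:
   each line occupies 13 consecutive bytes, MSB = leftmost pixel. */
static int get_pixel(const unsigned char *frame, int x, int y) {
    int byte_index = y * FRAME_W_BYTES + x / 8;
    int bit        = 7 - (x % 8);
    return (frame[byte_index] >> bit) & 1;
}
```

With this layout the frame size follows directly: 13 bytes per line times 49 lines is exactly the 637 bytes mentioned above.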
Movie frame to c-array
Each scene frame has to be coded in the Arduino C++ language. The procedure is to convert the frames extracted from movies into c-array files. These c-arrays can then be opened with an ASCII text editor and copy-pasted into the appropriate sketch.
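For illustration, a c-array for one 104*49 scene frame looks roughly like this. The array name is hypothetical and only the first line of bytes is shown; a real, generated file fills in all 637 bytes:

```c
/* Hypothetical fragment of a generated c-array for one scene frame.
   A real frame holds 13 * 49 = 637 bytes; only the first row of
   13 bytes (one display line) is shown here. */
const unsigned char walking_frame_01[637] = {
    0x00, 0x00, 0x00, 0x00, 0x1F, 0x80, 0x00,
    0x00, 0x00, 0x00, 0x00, 0x00, 0x00
    /* ... remaining 48 lines follow in the full file ... */
};
```

In the actual sketch such an array would additionally be marked PROGMEM, so the frame data stays in program memory instead of occupying the Arduino’s scarce SRAM.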
There are two ways to process an image captured from a movie into c-array format: directly, in a raster (bitmap) graphics program, or indirectly, via a vector graphics program.
I favor the vector graphic way. Vector drawings are endlessly and reversibly scalable without losing any detail. Scaling in bitmap manipulation programs is challenging and often arbitrary. Working with vector contours makes it very easy to shift a dino contour in one scene frame a little bit to improve the matching with the contour in the next frame. I call this manipulation ‘corrective centering’. Repetitive corrective centering is necessary to make sure that the final animation runs smoothly. In a raster graphics environment corrective centering is very difficult and time consuming.
Vector segmentation procedure
The (vector graphics) procedure developed to draw contours and further manipulate frames is illustrated in Figure 2. First, the movie frames selected for a scene are imported, each frame in its own layer. This results in a stack of ‘movie frame’ layers, each of which contains one of the imported movie frames. The next step is to insert additional layers (‘contour layers’) between the movie frame layers. In each contour layer the contour of the dino visible in the movie layer immediately below it is then drawn (fig. 2B). Thus, the end result of this operation is a scene vector file containing a stack of movie frame layers alternating with contour layers. The movie frame layers can subsequently be toggled ‘hidden’ or outright deleted (fig. 2D).
One extra (‘bottom’) layer that contains a rectangle with dimensions exactly matching 104*49 pixels on export is added to the contour layer stack. Export of this bottom layer together with one of the dino contour layers as monochrome bitmap (BMP format filter) produces one monochrome scene image frame, 104 pixels wide and 49 pixels high, ready to be incorporated in the animation.
Alignment to ensure fluent frame-to-frame transition
A vector program allows the user to move elements freely around, in our case any of the T-rex contours, while keeping each contour neatly in its own contour layer. The T-rex contours are realigned relative to each other (fig. 2D, ‘corrective centering’) by repositioning them slightly and if necessary adapting parts of contours. The aim here is to produce a maximally smooth frame-to-frame transition in the final stop-motion animation. In this stage of the exercise we have to make maximal use of the space offered by the 104*49 pixel frame rectangle. Every saved pixel counts!
Export to monochrome BMP
After this exercise each T-rex contour is filled with black color (the red color of the contours in figure 2 is for illustration purposes only) and exported together with the 104*49 rectangle in the bottom layer (colored white). The format of the exported monochrome image files has to be Microsoft’s BMP format at 1-bit color depth. Note that once exported from the vector drawing these bitmap images are no longer nicely scalable: if one attempts to scale them the dino becomes a jagged stack of big black squares. Scaling, if necessary, should be performed on the vector graphics file.
Important: after-export check
An after-export check saves disappointment later on. Whether all export actions have been performed successfully can be checked in any drawing program. A fast and conclusive way is to import all frames in Fiji as a sequence and to save this sequence as an animated GIF. Any outlier frame will be rejected by Fiji.
Each scene now consists of a set of monochrome BMP images that need to be converted into as many c-array files. Conversion can be done with the program ‘lcd-image-converter.exe’ which is a Windows based program freely available at Sourceforge (https://sourceforge.net/projects/lcd-image-converter/). The alternative is to experiment with an on-line BMP-to-c-array converter. I took the lcd-image converter way (figures 2F, 3).
Figure 3. Work screen of lcd-image-converter, a handy Windows utility to import the 1-bit BMP files exported from the vector drawing and convert these into c-array format. The ‘convert’ step is illustrated.
The work flow in lcd-image-converter is illustrated in figure 3.
Output of each cycle is a c-array formatted file with the extension .c that can be opened in any ASCII text editor. Handy editors are Gedit (Linux) and Notepad++ (Windows). The c-array is next copy-pasted into its position in the Arduino sketch.
Software used in this project
Figure 4. A, Scene ‘howling’ being displayed on a 128*64 OLED (SSD1306 controller). B, Complete animation consisting of four scenes with in total 55 scene frames running on a 130*130 TFT color display with SSD1283A controller. The microprocessor here is a Wemos D1 mini with its ESP8266 engine.
Downloadable sketches-1: monochrome
Each of the next four files contains a single scene. These sketches were written for the Arduino Uno equipped with a 128*64 OLED that has an SSD1306 controller on board (display visible in figure 4, A). The library used to instruct the OLED controller is <U8glib.h> by Olikraus.
Downloadable sketches-2: color
A TFT display offers the use of colors. We can render the dino in every RGB565 color available. While any TFT display may be used, an attractive screen is the 1.6 inch diagonal transflective 130×130 pixel TFT with SSD1283A controller and SPI interface (visible in fig. 4, B). I have painted the T-rex here in a conservative green color. Dinos are supposed to be green, aren’t they? However, there are 65,535 other colors to choose from.
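An RGB565 color is a 16-bit value with 5 bits of red, 6 bits of green and 5 bits of blue. A small helper, illustrative rather than taken from the sketches, packs ordinary 8-bit RGB components into that format:

```c
#include <assert.h>
#include <stdint.h>

/* Pack 8-bit R, G, B components into a 16-bit RGB565 value:
   top 5 bits of red, top 6 bits of green, top 5 bits of blue. */
static uint16_t rgb565(uint8_t r, uint8_t g, uint8_t b) {
    return (uint16_t)(((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3));
}
```

For instance, rgb565(0, 255, 0) yields 0x07E0, full green; mixing in some red and blue tones it down to the more conservative dino green used here.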
An ESP8266 powered microcontroller has so much more program memory available than an Arduino that we can chain all four scenes. This is achieved in the fifth sketch:
that features all four scenes randomly concatenated, with the T-rex changing color in one scene. This sketch is too big to fit in an Arduino’s memory, but with the Wemos D1 mini, or other members of the ESP8266 NodeMCU family, compiling is no problem and one can enjoy the T-rex involved in its different actions.
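Stripped of all display code, the chaining logic boils down to picking a random scene and stepping through its frames. The sketch below is a desktop illustration of that idea, not the actual sketch; function names are mine, and the real version would draw each 104*49 frame on the display with a short delay:

```c
#include <stdio.h>
#include <stdlib.h>

/* Frame counts of the four scenes, in the order in which they
   were defined: walking, running, howling, roaring. */
static const int   scene_frames[4] = {13, 9, 10, 23};
static const char *scene_names[4]  = {"walking", "running",
                                      "howling", "roaring"};

/* Play one randomly chosen scene and return its index. In the
   real sketch the inner loop would push frame f of scene s to
   the display and pause briefly between frames. */
static int play_random_scene(void) {
    int s = rand() % 4;
    for (int f = 0; f < scene_frames[s]; f++) {
        /* draw_frame(s, f); delay between frames */
    }
    printf("played %s (%d frames)\n", scene_names[s], scene_frames[s]);
    return s;
}
```

Calling play_random_scene() in an endless loop produces the randomly concatenated animation: 13 + 9 + 10 + 23 = 55 scene frames in total, visited in ever-changing order.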
T_rex_project_scenes.zip with all five sketches.
*Eadweard Muybridge’s galloping horse on Arduino 128*64 LCD, OLED and TFT displays – by Floris Wouterlood – https://thesolaruniverse.wordpress.com/2020/05/23/eadward-muybridges-galloping-horse-on-arduino-12864-lcd-oled-and-tft-displays/
**Zoetrope (https://en.wikipedia.org/wiki/Zoetrope): an animation device that produces the illusion of motion by displaying a sequence of drawings or photographs showing progressive phases of that motion.
***ADA – Arduino Dino Animation – by Floris Wouterlood – Thesolaruniverse, July 21, 2020 – https://thesolaruniverse.wordpress.com/2020/07/21/ada-arduino-dino-animation/