After the improvements to bot discovery and operation reported in June, work in the last two weeks continued on the bot development side.
@TOBIAS95 had some questions, and answering them was a good opportunity to write up in more detail how the process of bot development works.
Some of the topics covered:
+ How do we describe what a bot should do in a way that avoids misunderstandings between people?
+ How can I take and save the screenshots used to describe a bot?
+ How can I share the files I collected with other people?
I posted this guide here:
This week, I explored the bot implementation process further, based on the example screenshot given last week. This exploration answered several questions about developing bots:
How can we easily model example files for use in automated tests, for example when testing image file decoding implementations? The Elm app implemented here displays a Base64 encoding of the file chosen by the user. The Elm module containing the automated tests demonstrates how such a Base64 representation is converted back to the original Bytes sequence representing the file.
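As a loose illustration of that round trip (in Python rather than Elm, with a made-up byte sequence standing in for a user-chosen file):

```python
import base64

# Hypothetical bytes standing in for a file chosen by the user.
file_bytes = bytes([0x42, 0x4D, 0x1E, 0x00])

# Encode the file into a Base64 string, as the app would display it.
encoded = base64.b64encode(file_bytes).decode("ascii")

# Decoding the Base64 string recovers the original byte sequence --
# the property the automated tests rely on when modeling example files.
decoded = base64.b64decode(encoded)
assert decoded == file_bytes
```

Modeling test files as Base64 strings keeps them readable in source code while preserving the exact bytes.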
How do we decode an image that was saved to a BMP file using Paint.NET? The automated tests added here pair example image files with the Elm values expected from decoding them.
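To make the decoding step concrete, here is a minimal Python sketch (not the Elm implementation from the tests) of parsing an uncompressed 24-bit BMP; it assumes the common BITMAPINFOHEADER layout and the bottom-up row order that such files normally use:

```python
import struct

def decode_bmp_24bit(data):
    """Decode an uncompressed 24-bit BMP into a list of rows of
    (r, g, b) tuples, top row first. A sketch covering only the
    simplest case, not a full BMP reader."""
    if data[0:2] != b"BM":
        raise ValueError("not a BMP file")
    pixel_offset, = struct.unpack_from("<I", data, 10)
    width, height = struct.unpack_from("<ii", data, 18)
    bits_per_pixel, = struct.unpack_from("<H", data, 28)
    if bits_per_pixel != 24:
        raise ValueError("this sketch only handles 24 bits per pixel")
    row_size = (width * 3 + 3) & ~3  # rows are padded to 4-byte multiples
    rows = []
    for y in range(abs(height)):
        row_start = pixel_offset + y * row_size
        row = []
        for x in range(width):
            # Pixels are stored in BGR order.
            b, g, r = data[row_start + x * 3:row_start + x * 3 + 3]
            row.append((r, g, b))
        rows.append(row)
    if height > 0:
        rows.reverse()  # positive height means rows are stored bottom-up
    return rows
```

Modeling a tiny file like this in a test, together with the Elm value expected from decoding it, pins down exactly what the decoder must do.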
How do we locate user interface elements in screenshots? The pattern model implemented here supports flexible configuration of a search that works with screenshots from video games like EVE Online.
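The idea behind such a configurable search can be sketched in a few lines of Python; the function names here are hypothetical and only illustrate the approach, not the Elm pattern model itself:

```python
def find_matches(image, pattern_width, pattern_height, pixel_predicate):
    """Return the (x, y) offsets at which every pixel in a
    pattern_width x pattern_height window satisfies pixel_predicate.
    The image is a list of rows of (r, g, b) tuples."""
    matches = []
    img_height = len(image)
    img_width = len(image[0]) if img_height else 0
    for y in range(img_height - pattern_height + 1):
        for x in range(img_width - pattern_width + 1):
            if all(pixel_predicate(image[y + dy][x + dx])
                   for dy in range(pattern_height)
                   for dx in range(pattern_width)):
                matches.append((x, y))
    return matches

def looks_red(pixel, tolerance=60):
    """Tolerant color test: a UI element in a game screenshot is rarely
    one exact RGB value, so we match a range instead of equality."""
    r, g, b = pixel
    return r > 255 - tolerance and g < tolerance and b < tolerance
```

Expressing the pattern as a per-pixel predicate with a tolerance, rather than exact colors, helps with the color variation typical of game UIs.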
How can we make it easy for developers to find the right search configuration for their use cases? The graphical user interface implemented here supports quick configuration and testing of image search patterns.