How to Develop Bots - Guide

Changing from C# to Elm is a pretty significant transition, and a hard pill to swallow for many. I’m very familiar and comfortable with C#, but I’ve never seen Elm. Is there a location where you’ve created an introduction on how to develop bots in this new way?

I see that you linked to a guide, but it really only seems to lightly cover the setup of the development environment and says nothing about how to actually develop anything. It’s like this guide on how to develop EVE Online bots is saying “here are the steps to set up Elm, and once that’s done you just start developing a bot”. It doesn’t actually touch on any development.

Some of this may be due to the fact that you just transitioned to Elm suddenly, and some may be my own ignorance around Elm. Either way, I feel like there’s a huge disconnect here, and I think the community would appreciate a more detailed guide on how to actually develop a bot once their environment is setup.

Is this something you can provide? I’m also willing to help create some such content, but I’d need your help in understanding Elm, the structure of the files in your bots, etc.

Thanks for your consideration on this.

P.S. It might be better to have a set of C# libraries that support various functions in EVE Online rather than just introducing another programming language. I’m not sure that will go very far in making non-developers’ lives any easier. (I hope I’m wrong)


[I moved this from the topic about the Mission Running bot, as I do not see how it is related to that topic]

Thank you @csharper for sharing your experience. It is very helpful, as you can see in today’s expansion of the guide. Since you mentioned your familiarity with C#, I also describe the architecture parts which are different in Elm.

Yes, I provide a detailed guide on how to develop bots. The guide is based on people’s questions, so the level of detail depends on the questions I see. I continue to expand the guide to cover new questions as they come up.
And thank you for offering to support this. If you can point out any questions which are not yet covered in the guides, that seems an effective way to help. After your post today, I have some remaining open questions to address, but I might run out of questions next week.

For today’s addition to the guide to address the first part of today’s questions, see the post below.

Bot Architecture

Before we look at any code, let me give you a high-level overview of how a bot works and how it is structured.

A bot is a program which reacts to events. Every time an event happens, the engine tells the bot. Given this information, the bot then computes its new state and a response to this event.

This event response is given to the engine and contains the following two components:

  • A status message to inform about the current state in a human-readable form. When you run a bot, you can see the engine displaying this message.
  • A list of tasks for the engine to execute.

This event/response cycle repeats for every event happening during the operation of the bot.

Some examples of events:

  • The user sets the bot configuration (as explained in the guide on how to use bots).
  • The engine completes executing one of the tasks it received from the bot in an earlier cycle. The event contains the result of the execution of this task.

Examples of tasks the bot can give to the engine:

  • Take a screenshot of a window of another app on the system.
  • Read the contents of another process’ memory.
  • Send a mouse click to a specific position in a window in another process.
  • Simulate pressing a keyboard key.
  • Start a new Windows process, specifying the path to an executable file.
  • Stop another process on the system.

As we can see from the examples above, these events and tasks can be quite fine-grained, so you might see the event/response cycle happen several times per second.
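To make this concrete, here is a hypothetical sketch in Elm of the shapes involved in the event/response cycle. All type, constructor, and field names below are my own illustration, not the framework’s actual API:

```elm
-- Hypothetical sketch of the event/response cycle.
-- All names here are illustrative, not the framework's actual API.


type Task
    = TakeScreenshot
    | SendMouseClick { x : Int, y : Int }


type alias Response =
    { statusMessage : String
    , startTasks : List Task
    }


type alias State =
    { eventCount : Int }


processEvent : String -> State -> ( State, Response )
processEvent event stateBefore =
    let
        state =
            { stateBefore | eventCount = stateBefore.eventCount + 1 }
    in
    ( state
    , { statusMessage = "Processed " ++ String.fromInt state.eventCount ++ " events."
      , startTasks = []
      }
    )
```

The important part is the shape: the bot never performs an effect itself; it only returns a status message and a list of tasks for the engine to execute.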

Bot Code

File Structure

The bot code is a set of files. Some of these files are located in subdirectories. The bot code always contains the following three files:

  • src/Main.elm: When you code a bot from scratch, this file is where you start to edit.
  • src/Bot_Interface_To_Host_20190720.elm: You don’t need to edit anything in here.
  • elm.json: You only edit this file to include Elm packages (that is a way to include functionality from external sources).

You can distribute code over more .elm files, but this is not required; you can add everything to the src/Main.elm file.

Each file with a name ending in .elm contains one Elm module. Each module contains functions, which are composed to describe the behavior of the bot.

Entry Point - processEvent

Each time an event happens, the framework calls the function interfaceToHost_processEvent from the Main.elm file. Because of this unique role, this function is sometimes also referred to as ‘entry point’.

Let’s look at how this function is implemented. Usually it will look like this:

interfaceToHost_processEvent : String -> InterfaceBotState -> ( InterfaceBotState, String )
interfaceToHost_processEvent =
    InterfaceToHost.wrapForSerialInterface_processEvent processEvent

This function takes care of serializing and deserializing on the interface to the engine, and delegates everything else to the processEvent function in the same file. It translates between the serial representations used on the interface and typed values, so that we can enjoy the benefits of the type system when working on the bot code. In theory, this function could look different, because you could rename the function processEvent to something else. But we will leave this function alone, forget about it and turn to the processEvent function.

Let’s look at the type signature of processEvent, the first line of the functions source code:

processEvent : InterfaceToHost.BotEventAtTime -> State -> ( State, InterfaceToHost.ProcessEventResponse )

Thanks to the translation in the wrapping function discussed above, the types here are already more specific. So this type signature tells us more precisely what kinds of values this function takes and returns.

The actual names for the types used here are only conventions. You might find a bot code which uses different names. For example, the bot author might choose to abbreviate InterfaceToHost.BotEventAtTime to BotEventAtTime, by using a type alias.

I will quickly break down the Elm syntax here: The part after the last arrow (->) is the return type. It is a tuple with two components. The part between the colon (:) and the return type is the list of parameters. So we have two parameters, one of type InterfaceToHost.BotEventAtTime and one of type State.
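As a toy illustration of reading such a signature, here is a made-up function with the same shape (not part of the framework):

```elm
-- A made-up function with the same shape as processEvent:
-- two parameters (a String and an Int), returning a tuple of two components.
labelAndIncrement : String -> Int -> ( Int, String )
labelAndIncrement name counter =
    ( counter + 1, name ++ ": " ++ String.fromInt (counter + 1) )
```

Reading the signature the same way: the return type is the tuple `( Int, String )`, and the two parameters are a `String` and an `Int`.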

Let’s have a closer look at the three different types here:

  • InterfaceToHost.BotEventAtTime: This describes an event that happens during the operation of the bot. All information the bot ever receives is coming through the values given with this first parameter.
  • InterfaceToHost.ProcessEventResponse: This type describes what the engine should do.
  • State: The State type is specific to the bot. With this type, we describe what the bot remembers between events. When the engine informs the bot about a new event, it also passes the State value which the bot returned after processing the previous event (the first component of the tuple in the return type). But what if this is the first event, so there is no previous event? In this case, the engine takes the value from the function interfaceToHost_initState to give to the bot.
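As a sketch, a bot-specific State type and its initial value could look like this. The record field is a hypothetical example; only the function name interfaceToHost_initState comes from the convention described above:

```elm
-- Hypothetical example of a bot-specific State type.
-- The field name is an illustration, not from an actual bot.


type alias State =
    { timesSeenSetConfiguration : Int }


-- The engine uses this value as the State for processing the first event.
interfaceToHost_initState : State
interfaceToHost_initState =
    { timesSeenSetConfiguration = 0 }
```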

This is a great first step. I don’t feel like I have enough information to build my own bot, regardless of its simplicity, but I think it’s absolutely helped me already. Looking forward to more coming. Keep up the great work @Viir!


Thank you for checking the addition to the guide, good to know we are making progress! Next parts will arrive soon.

While working on the guide, I found some things to improve on the interface between the bot and the host. That is why I had added this reminder in the earlier post:

I just updated the examples and the guide accordingly:

Today I made progress with the guide. I finished an example bot, which illustrates the development of the acting side of a bot. This bot sends mouse and keyboard input to the window selected by the user.
The link below leads to the complete commit in the repository:

You now find this bot at bots/implement/templates/send-input-to-window at main · Viir/bots · GitHub

In the Bot.elm file, we have this definition of what the bot does after the target window is selected:

When you run this bot on MS Paint, you get a picture like this (with an offset, because I scrolled before taking the screenshot):

You can see how the coordinates given in the bot code correlate with the ones in the image.
Also, since I selected two different colors in paint before starting the bot, you can see where the bot used the left mouse button (or the space key) and right mouse button.

After covering the development of the acting side of bots last week, this week, I show how to develop on the sensing side.

I added the locate-object-in-window bot template as a complete bot to demonstrate this. This bot locates objects in a game client or application window and displays their locations.

I added this bot template into the bots repository, you find it at

The screenshot below shows this bot in action. On the right side, a paint app window is open containing multiple instances of the object the bot should locate. On the left side, you see the console window running the bot which has been set to work on the paint app:


The setup in this screenshot illustrates a way to test the sensor parts of a bot: Load the training images into the paint app, start the bot and pick the paint app window as the target to bot on. Then see if the bot reads the expected values from the training image.

In the case shown in the screenshot above, I used the ‘Undock’ button in the game EVE Online as an example. This button can have different appearances in the game. To see if the bot correctly locates different variants, I placed two examples of this button on the paint canvas. The bot states all locations where it found the object in its status message so that I can quickly check for correctness.

After showing the testing process, let’s look at how the bot is coded to achieve this. As in all templates, you find the bot’s code in the src/Bot.elm file in the bot code directory.

The bot starts a TakeScreenshot task to get a screenshot of the game client or app it works in. When it receives the TakeScreenshotResult, it uses the SimpleBotFramework.locatePatternInImage function to get the list of locations where the object is found in the screenshot:

The function SimpleBotFramework.locatePatternInImage takes three arguments:

  • The pattern to search for. This describes the visual appearance of all the different instances of an object we don’t want to distinguish. In the example above, we don’t distinguish between the two different appearances of the ‘Undock’ button. We aggregate them into the same object type, so we only use a single pattern for this.

  • The region of the window to search in. In this case, we search everywhere.

  • The image to search in. We get this value from the framework, in the TakeScreenshotResult task result.

Writing a pattern for use with SimpleBotFramework.locatePatternInImage

Such a pattern is a function which looks at surrounding pixels to see if these pixels indicate the presence of the object.
This function has the following type:

({ x : Int, y : Int } -> Maybe PixelValue) -> Bool

What we can see in this type is that it takes one argument and can only return True or False. The return value indicates the presence of the searched object or pattern. The argument is itself a function. It is the pixel querying function:

{ x : Int, y : Int } -> Maybe PixelValue

This function takes a location and returns the value of the pixel at this location. Since the queried location could be outside the image boundaries, the pixel value could be Nothing, so the type for the returned pixel value is wrapped in a Maybe.

A pixel value is composed of the three components red, green, and blue. These three components are integers and can each range from 0 to 255. The values for red, green, and blue are the same ones you can see using the color picker tool in Paint.NET:
using the Paint.NET color picker tool to read pixel values
The pattern function is free to do all sorts of arithmetic on the pixel values. For example, it could check the contrast between neighboring pixels by computing the difference of their values.

When writing a pattern, we don’t need to check all pixels which we would consider part of the representation of the object on the screen. We only need to check enough of them to avoid false positives in screenshots that the bot might encounter during its operation. A false positive in this case means the bot locates an object in a screenshot where it should not. In other words, it returns True too often. When we see this happen, we change the pattern to be more restrictive, by adding a constraint. In many cases, checking a small fraction of the pixels can be sufficient to avoid false positives.
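A pattern along these lines could look as follows. This is a sketch under assumptions: I assume PixelValue is a record with red, green, and blue fields as described above, and the checked locations and thresholds are made up:

```elm
-- Hypothetical pattern: reports a match when the queried origin pixel
-- is strongly red and the pixel a few steps to the right is dark.
-- The PixelValue field names and all thresholds are assumptions.
examplePattern : ({ x : Int, y : Int } -> Maybe PixelValue) -> Bool
examplePattern getPixelValueAtLocation =
    case ( getPixelValueAtLocation { x = 0, y = 0 }, getPixelValueAtLocation { x = 4, y = 0 } ) of
        ( Just originPixel, Just rightPixel ) ->
            (originPixel.red > 180)
                && (originPixel.green < 100)
                && (rightPixel.red < 60)

        -- A queried location outside the image boundaries yields Nothing.
        _ ->
            False
```

Note how the Maybe forces us to decide what to do near the image boundary: here, any out-of-bounds query simply means no match.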

Meanwhile, I simplified the part about the file structure: You can place your bot code in the Bot.elm file. No need to look at the other files. To avoid distraction by the other files/modules, I placed those into a subdirectory.
You can see the entry point of the bot here:

@csharper, with the extensions to the bot development guide and the new templates, as far as I see, all your questions are answered now.
Let me know if any questions remain.

The bot templates are now all located in

I saw you posted a coding video earlier and I found that really helpful. Is there any way you could walk through a bot start to finish and comment on what it is doing and how the flow happens? I am a programmer, but I am used to Java/PHP and more front-end code for the web. The syntax changes are really hard to absorb, and even little things like indents and blank lines can break syntax in Elm, it seems. Thanks, any tips would be helpful. I have been able to make a few small edits, like changing the intel bot to look for bad people, not “not good people”, or reading out the entries in the overview. Ultimately I want to make a mining bot that docks on the first sign of bad pilots in local. But the mining bot and the intel bot seem so different. Not sure if the naming of the functions is different or if they are structurally different.

Glad to see the video is helpful.

Yes, I can make a video on this topic too.

The first point might be that everything that leads to an effect flows through the processEveOnlineBotEvent function:

processEveOnlineBotEvent :
    EveOnline.BotFramework.BotEventContext
    -> EveOnline.BotFramework.BotEvent
    -> BotState
    -> ( BotState, EveOnline.BotFramework.BotEventResponse )
processEveOnlineBotEvent eventContext event stateBefore =
As we can see in the type annotation, the return type of the processEveOnlineBotEvent function is this tuple:

( BotState, EveOnline.BotFramework.BotEventResponse )

The BotState is specific to each app; it contains the information that we want to retain between events. The type annotation for processEveOnlineBotEvent also shows that the last parameter is also of type BotState. When the framework calls processEveOnlineBotEvent, it passes the BotState it received from processEveOnlineBotEvent in the previous step. This is the only way our app can remember things: give this value to the framework in the return value, and receive it back from the framework when processing the next event.
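A minimal sketch of this state threading, with a made-up BotState field (the name and purpose here are illustrative, not from an actual bot):

```elm
-- Hypothetical BotState: the only way to remember something between
-- events is to put it into the state value returned to the framework.
type alias BotState =
    { lastSeenHostileTime : Maybe Int }


-- Returns the new state; the framework hands it back with the next event.
rememberHostileSighting : Int -> BotState -> BotState
rememberHostileSighting time stateBefore =
    { stateBefore | lastSeenHostileTime = Just time }
```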

The EveOnline.BotFramework.BotEventResponse contains all the effects we want the framework to perform, like:

  • Status description to display in the UI.
  • Sending inputs to the game client.
  • Playing sounds.

I see the languages you mention differ in how they model mutations. Java, PHP, JavaScript, etc. support local mutations and effects. In contrast, the framework for EVE Online here uses an immutable language, so that all mutations and effects flow through return values and ultimately through processEveOnlineBotEvent, as explained above.

Not sure what you saw with blank lines, but indents matter for sure. I use elm-format to automate consistent formatting and make the code easier to read. You can see its effect in my video too, since it is applied every time I save a file. Since you mention blank lines: this tool also removes or inserts line breaks to make the blank lines consistent.


I am very interested in robot technology and want to learn this. Can you suggest how to get started?

Welcome Shams! It depends on what your Robot should do.
Here is an example: You want it to locate an object on the screen and click on that. In this case, your bot can use screenshots to perceive its environment. To develop the bot, you get screenshots for your scenario as training data. This training data set is then used to train the bot to learn how to locate the object you are interested in.
A bot is mainly developed by collecting and curating these collections of training scenarios.

Thanks for such a prompt reply, and thanks for your answer!