In this thread, I share what is on my mind concerning BotLab, discovering and developing bots, in chronological order. It helps me organize my thoughts and remember useful discoveries.
If you see something that is missing here, something I should be aware of, don’t hesitate to message me or post on the forum in the ‘Show and Tell’ category.
Several people mentioned having problems getting hash values for files.
Being able to share and identify files is essential when working on bots, so I wrote this guide on how to do that using hashes:
Which part of the intermediate results we reuse when reading the UI tree multiple times from the same EVE Online client process. (In the Sanderling app, we reuse the address of the root of the UI tree, because this address does not change. Reusing this part saves time, so subsequent readings of the complete UI tree complete in about 200 milliseconds.)
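The caching idea described here can be sketched as follows (the function names and shapes are hypothetical, not the actual Sanderling API): the expensive part of a measurement is scanning process memory for the UI tree root, so we remember that address per process and skip the scan on later readings.

```python
# Cache of previously found UI tree root addresses, keyed by process id.
ui_tree_root_cache = {}

def read_ui_tree(process_id, search_root_address, read_tree_from_address):
    """Read the UI tree, reusing a previously found root address if available.

    `search_root_address` is the expensive memory scan; `read_tree_from_address`
    reads the tree starting from a known root address.
    """
    root_address = ui_tree_root_cache.get(process_id)
    if root_address is None:
        root_address = search_root_address(process_id)  # expensive scan, done once
        ui_tree_root_cache[process_id] = root_address
    return read_tree_from_address(process_id, root_address)
```

Only the first reading from a given process pays for the scan; all subsequent readings start directly from the cached root address.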
Where the transition is between the intermediate representation (partial Python objects) of the memory measurement and the representation found in the bot developer's API. Observations over the last three years showed this is an important boundary: when we adapted the memory reading to new client versions from CCP, we did so in the part that happens after reading the partial Python objects.
The next improvement planned there is to reduce the time to get feedback on experiments and to support easier exploration of the APIs. It looks like there will be a REPL that supports incrementally building up functionality that can be composed into bots. The same interface will also support quick inspection of results from memory reading.
As mentioned earlier, I am working on a new version of the EVE Online bot framework to address the issues with understanding bot code, bad surprises at runtime, and recording and inspecting bot operation.
This is a report on the recent progress with developing this framework:
I explored possible solutions using the example of a warp to 0km autopilot bot. Today I uploaded a rough draft of how this framework could be used. This draft concentrates on the static parts: framework API and programming language. Other aspects are not presented there. The programming language part is already well demonstrated, as this draft contains said autopilot bot; I copied the bot code below:
module Main exposing (botStep)

{-| This is a warp to 0km auto-pilot, making your travels faster and thus safer by directly warping to gates/stations.
The bot follows the route set in the in-game autopilot and uses the context menu to initiate warp and dock commands.

To use the bot, set the in-game autopilot route before starting the bot.
Make sure you are undocked before starting the bot because the bot does not undock.
-}

import SimplifiedSanderling
    exposing
        ( BotEvent(..)
        , BotEventAtTime
        , BotRequest(..)
        , InfoPanelRouteRouteElementMarker
        , MemoryMeasurement
        , MouseButtonType(..)
        , centerFromRegion
        , mouseClickAtLocation
        )


-- This implementation is modeled after the script from https://github.com/Arcitectus/Sanderling/blob/5cdd9f42759b40dc9f39084ec91beac70aef4134/src/Sanderling/Sanderling.Exe/sample/script/beginners-autopilot.cs


{-| We need no state for the autopilot bot
-}
type alias State =
    ()


init : ( State, List BotRequest )
init =
    ( initialState, [] )


initialState : State
initialState =
    ()


botStep : BotEventAtTime -> State -> ( State, List BotRequest )
botStep eventAtTime stateBefore =
    case eventAtTime.event of
        MemoryMeasurementCompleted memoryMeasurement ->
            ( initialState, botRequests ( eventAtTime.timeInMilliseconds, memoryMeasurement ) )


botRequests : ( Int, MemoryMeasurement ) -> List BotRequest
botRequests ( currentTimeInMilliseconds, memoryMeasurement ) =
    case memoryMeasurement |> infoPanelRouteFirstMarkerFromMemoryMeasurement of
        Nothing ->
            [ ReportStatus "I see no route in the info panel. I will start when a route is set."
            , TakeMemoryMeasurementAtTime (currentTimeInMilliseconds + 4000)
            ]

        Just infoPanelRouteFirstMarker ->
            case memoryMeasurement |> isShipWarpingOrJumping of
                Nothing ->
                    [ ReportStatus "I cannot see whether the ship is warping or jumping."
                    , TakeMemoryMeasurementAtTime (currentTimeInMilliseconds + 4000)
                    ]

                Just True ->
                    [ ReportStatus "I see the ship is warping or jumping, so I wait."
                    , TakeMemoryMeasurementAtTime (currentTimeInMilliseconds + 4000)
                    ]

                Just False ->
                    botRequestsWhenNotWaitingForShipManeuver
                        memoryMeasurement
                        infoPanelRouteFirstMarker
                        ++ [ TakeMemoryMeasurementAtTime (currentTimeInMilliseconds + 2000) ]


botRequestsWhenNotWaitingForShipManeuver : MemoryMeasurement -> InfoPanelRouteRouteElementMarker -> List BotRequest
botRequestsWhenNotWaitingForShipManeuver memoryMeasurement infoPanelRouteFirstMarker =
    let
        announceAndEffectToOpenMenu =
            [ ReportStatus "I click on the route marker to open the menu."
            , mouseClickAtLocation
                (infoPanelRouteFirstMarker.uiElement.region |> centerFromRegion)
                MouseButtonRight
                |> Effect
            ]
    in
    case memoryMeasurement.menus |> List.head of
        Nothing ->
            [ ReportStatus "No menu is open."
            ]
                ++ announceAndEffectToOpenMenu

        Just firstMenu ->
            let
                maybeMenuEntryToClick =
                    firstMenu.entries
                        |> List.filter
                            (\menuEntry ->
                                let
                                    textLowercase =
                                        menuEntry.text |> String.toLower
                                in
                                (textLowercase |> String.contains "dock")
                                    || (textLowercase |> String.contains "jump")
                            )
                        |> List.head
            in
            case maybeMenuEntryToClick of
                Nothing ->
                    [ ReportStatus "A menu was open, but it did not contain a matching entry." ]
                        ++ announceAndEffectToOpenMenu

                Just menuEntryToClick ->
                    [ ReportStatus ("I click on the menu entry '" ++ menuEntryToClick.text ++ "' to start the next ship maneuver.")
                    , mouseClickAtLocation (menuEntryToClick.uiElement.region |> centerFromRegion) MouseButtonLeft |> Effect
                    ]


infoPanelRouteFirstMarkerFromMemoryMeasurement : MemoryMeasurement -> Maybe InfoPanelRouteRouteElementMarker
infoPanelRouteFirstMarkerFromMemoryMeasurement =
    .infoPanelRoute
        >> Maybe.map .routeElementMarker
        >> Maybe.map (List.sortBy (\routeMarker -> routeMarker.uiElement.region.left + routeMarker.uiElement.region.top))
        >> Maybe.andThen List.head


isShipWarpingOrJumping : MemoryMeasurement -> Maybe Bool
isShipWarpingOrJumping =
    .shipUi
        >> Maybe.andThen .indication
        >> Maybe.andThen .maneuverType
        >> Maybe.map (\maneuverType -> [ SimplifiedSanderling.Warp, SimplifiedSanderling.Jump ] |> List.member maneuverType)
Today I worked on removing a bottleneck from EVE Online bot development. In the past, adapting the memory measurement parsing code required a developer to set up Visual Studio and use a .NET build. To remove this friction, I moved the parsing code into the bot scope, where it can be changed as easily as your own bot code.
The commit in the Sanderling repository adds an example of how to write the serialized memory measurements to files, so these can be easily imported when experimenting with changes to the parsing code: https://github.com/Arcitectus/Sanderling/commit/4c848131cd3248a42a25c6b86536ac53eca7a4af
After running this new derivation code on a process sample, you will find the files partial-python.json, sanderling-memory-measurement.json and sanderling-memory-measurement-parsed.json in a subdirectory named after the process.
Edit:
I improved the names for the derivations to:
Avoid having two different kinds of ‘memory measurement’.
Clarify that the reduction happens from partial python to get the other versions.
Today I started the guide on developing EVE Online bots, describing how to set up the programming tools to efficiently work on EVE Online bots. You can find it here:
With the new software, a generic interface between bot and host is introduced, which offers more flexibility in the functionality integrated into a bot.
One concrete example of how this flexible interface improves over the old Sanderling app: it allows coding custom logic to choose a Windows process when multiple instances of the game client process are present.
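Such custom selection logic could look like this (a schematic sketch in Python rather than the actual bot interface; the record fields here are made up for illustration): given a list of running processes, pick the game client instance whose main window title names the character we want to run the bot on.

```python
def choose_game_client_process(processes, character_name):
    """Return the first process record that looks like an EVE Online
    client with `character_name` in its main window title, or None.
    Each record is a dict with 'executable_name' and 'main_window_title'
    (illustrative field names, not the real interface)."""
    candidates = [
        process
        for process in processes
        if process["executable_name"] == "exefile.exe"
        and character_name in process["main_window_title"]
    ]
    return candidates[0] if candidates else None
```

The point is not this particular rule, but that the bot author can now express whatever rule fits their setup, instead of being limited to the selection built into the host.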
Most developers won’t need to use this new interface directly but will use it indirectly over libraries encapsulating these low-level functionalities for easier use.
An example of such a library is the one for EVE Online, contained in the warp-to-0 auto pilot bot, in the file Sanderling.elm:
Recently there was a change in EVE Online which affected several users: After an update of the EVE Online client, some bots had issues undocking a ship from a station. @Aseratis and @Kaboonus fixed this problem as you can see in these posts:
I released version 2019-05-29 of the bot framework and the botengine console. The new version brings several improvements:
Cleaned up the interface between bot and host (engine) to improve the readability of the bot code.
Added the bot side interface to set a configuration for a bot.
Better support for identifying bots: The bot ID is now displayed in the user interface. Also, a change in the packaging of a bot from local files fixes nondeterministic bot IDs: previously, the bot ID could change without any change in the files, because timestamps were added at the time of loading the bot.
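The deterministic-ID idea can be sketched like this (an illustration of the principle, not the actual botengine packaging code): derive the ID only from the file paths and contents, in a sorted order, so metadata such as load-time timestamps cannot influence the result.

```python
import hashlib

def deterministic_id_from_files(files):
    """Derive an ID from a dict mapping file path to file content (bytes).
    Sorting the paths makes the result independent of insertion order,
    and hashing only paths and contents excludes timestamps."""
    digest = hashlib.sha256()
    for path in sorted(files):
        digest.update(path.encode("utf-8"))
        digest.update(b"\0")  # separator so path/content boundaries are unambiguous
        digest.update(files[path])
        digest.update(b"\0")
    return digest.hexdigest().upper()
```

With such a scheme, packaging the same files twice always yields the same bot ID, while any change to a file changes the ID.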
Recently some people reported problems with downloading bots to load them into the engine. Today’s release of the new BotEngine Console adds a new feature to address this problem. Instead of downloading the bot from Github manually, you can now let the BotEngine Console do this for you. To use this new feature, specify the Github URL with the --bot-source parameter. The engine will then take care of the download:
This bot source looks like a URL. I try to load the bot from Github
I found 8 files in 'https://github.com/Viir/bots/tree/32559530694cc0523f77b7ea27c530ecaecd7d2f/implement/bot/eve-online/eve-online-warp-to-0-autopilot'.
I loaded bot 266EDDE2CCA2F71BC94DFD941F469E5F3DA20DACFD08A08E680B0C09646DF6C1.
Starting the bot....
I also updated the guide to explain the different kinds of bot sources and include examples:
Recently, people reported bot configuration issues that slowed them down. This new release of the botengine console adds a feature to fix these issues. The new feature allows you to:
Configure a bot using the command-line interface, by adding the new --bot-configuration parameter.
Configure a bot without changing the bot code, which in turn means without changing the bot ID. This makes it easier to compare bots, using the now more stable bot ID.
I updated the guide to illustrate how this new feature is used, including an example command line:
After the improvements on bot discovery and operation as reported in June, work continued on the bot development side in the last two weeks.
@TOBIAS95 had some questions, and while answering them I took the opportunity to describe in more detail how the process of bot development works.
Some of the topics covered:
How do we describe what a bot should do, in a way that avoids misunderstandings between people?
How can I take and save the screenshots used to describe a bot?
How can I share the files I collected with other people?
I posted this guide here:
This week, I explored the bot implementation process further, based on the example screenshot given last week. This exploration answered several questions about developing bots:
How can we easily model example files to use in automated tests, for example when testing image file decoding implementations? The Elm app implemented here displays a Base64 encoding of the file chosen by the user. The Elm module containing the automated tests demonstrates how such a Base64 representation of a file is converted back to the original Bytes sequence representing the file.
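The modeling approach is a simple round-trip; sketched here in Python rather than Elm for brevity: encode an example file's bytes to a Base64 string (which is easy to paste into test code as a literal), then decode that string back to the original byte sequence inside the test.

```python
import base64

# Example file content, standing in for bytes read from a real file.
original_bytes = bytes([0x42, 0x4D, 0x3A, 0x00, 0x00, 0x00])

# The Base64 text is what gets embedded in the automated test as a literal.
base64_text = base64.b64encode(original_bytes).decode("ascii")

# Decoding recovers the exact original byte sequence.
decoded_bytes = base64.b64decode(base64_text)
assert decoded_bytes == original_bytes
```

Because the round-trip is lossless, a test can carry the example file as plain text and still exercise the decoder on the exact original bytes.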
How do we decode an image that was stored in a BMP file using Paint.NET? The automated tests added here model files and the expected Elm values resulting from decoding these image files.
How do we locate user interface elements in screenshots? The pattern model implemented here supports flexible configuration of a search which works with screenshots from video games like EVE Online.
How can we make it easy for developers to find the right search configuration for their use-cases? The graphical user interface implemented here supports quick configuration and testing of image search patterns.
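To make the pattern-search idea above concrete, here is a minimal sketch (in Python; the actual implementation linked here is in Elm and supports much more flexible matching than this exact-match version): slide a small template over the screenshot and collect the offsets where every pixel matches.

```python
def find_pattern(screenshot, pattern):
    """Both arguments are 2D lists of pixel values (rows of columns).
    Returns a list of (left, top) offsets where `pattern` matches exactly."""
    matches = []
    pattern_height, pattern_width = len(pattern), len(pattern[0])
    for top in range(len(screenshot) - pattern_height + 1):
        for left in range(len(screenshot[0]) - pattern_width + 1):
            if all(
                screenshot[top + row][left + column] == pattern[row][column]
                for row in range(pattern_height)
                for column in range(pattern_width)
            ):
                matches.append((left, top))
    return matches
```

A practical search configuration relaxes the per-pixel equality (for example, tolerating small color differences), which is exactly what the configurable search mentioned above is for.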
I made progress with the guides for bot developers:
Last week, I started another exploration to learn how to further improve the development process for the image processing parts in our bots.
One result of this exploration is a guide that explains how to quickly test candidate functions against example screenshots. Using this approach, you can test the image processing parts of your bots without needing to start a game client. Instead, you point the test framework to an example screenshot and review the results.
I illustrated the approach to image processing and finding objects in screenshots with an example implementation and image.
A byproduct of this exploration is a demonstration of file loading (and parsing) in a bot. So if you have a use-case where you need this, the demo bot implemented here might be a good starting point.