Using Bots for EVE Accessibility

Hi All,

I am a newbie to EVE Online, interested in the potential of using bots to help me play the game as a totally blind user. The default game client is not very accessible with my screen reading software, but it strikes me that with a bit of work it might be possible to make some parts of the experience a little easier to deal with, whether by handling mouse clicks and displaying game info via a command line, or by other means.

I’m not all that familiar with what is currently possible, but the biggest limitation for my screen reader at the moment is the fact that the game requires me to use OCR to read anything meaningful on the screen. This is, needless to say, difficult to do reliably, and quite tedious.

I wanted to post this topic as a means to explore what might be possible, and ways to begin doing so. At the moment my ship is in the newbie tutorial experience, and I could honestly use a bot to help me click on a wreck in game, as it isn’t showing up in the overview for reasons I haven’t been able to figure out.

If this kind of thing is possible, I’d appreciate some thoughts on how.

Hello, and welcome, Zachary!

Thank you for taking the time to share your experience. Looking at statements from the game’s developer, CCP, it does not surprise me that the game client does not support accessibility for blind users. If there were an interface to get the information from the game client in a well-structured, easily readable form, this would also make automation easier. CCP has said they see automation happening and consider it a problem, but they have not fixed the game design problems that drive people to use automation. Poor accessibility remains a cheap way to impede automation.

Now the good news: the accessibility of EVE Online can be improved a lot with additional software. We have explored reading the information directly from the game client’s memory. Using this approach, we already get a lot of information. It is comprehensive enough to automate a wide range of in-game activities, including running so-called security missions, where players need to navigate, fight rats, and interact with diverse objects in space.

Now about this specific challenge you mentioned in your post:

You can configure the overview window to show wrecks or hide wrecks. Have you already tried this? Do you know about overview presets and how to check what preset is currently active in the overview window? I might be getting the terminology wrong. There are probably some useful guides about how to work with the overview window, explaining these things.

When making a bot, we usually use the overview window to interact with objects in space. The usual approach is to configure one or more overview presets that cover everything the bot needs for its task. An overview preset can be configured to filter out selected types of objects. This filtering reduces the number of overview rows we need to read to find the in-game object we are interested in. In simple cases like mining, a single overview preset can be sufficient. When we want to use different presets, the bot can switch between them.
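
To give a sense of what this looks like on the bot side, here is a minimal C# sketch of filtering overview rows. The `OverviewRow` record and its fields are hypothetical stand-ins for the real Sanderling types, which are named differently; only the filtering idea is the point.

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical stand-in for an overview row as read from memory;
// the real Sanderling types have different names and more fields.
record OverviewRow(string Name, string Type, string Distance);

static class OverviewFiltering
{
    // Keep only the rows whose type column mentions wrecks.
    public static IEnumerable<OverviewRow> WrecksOnly(IEnumerable<OverviewRow> rows) =>
        rows.Where(row => row.Type != null && row.Type.Contains("Wreck"));
}
```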

Hey,

Thanks so much for clarifying the situation. :slight_smile: I only hope that since I’m not intending to use bots for quite the same things as other folks who use automation, it will be a bit less likely to irritate the powers that be at CCP.

The specific case of this wreck is annoying because apparently wrecks were already set to show in the overview window. I’m not sure why this one doesn’t show up; it’s at the very start of the game, a “Concord Research Battleship” in the initial tutorial level.

Is there any way to explore the game memory via the command line, or otherwise in real time? That might help me get a better sense for how things operate and what might be possible.

Thanks again for your help.

As an update, apparently it is available in the overview if you select everything, but that is of course inefficient for our purposes. I’ll have to try tonight and see if I can create a tab for that. I was trying to figure out the UI for the development environment a bit, but it was slightly confusing. I don’t know how many of the issues I was having were related to the fact that I was accessing it with a screen reader, however.

Yes, we can find you a way to explore the data in the game client in real time. I guess the quickest way is to reuse an existing C# scripting library and integrate it with the functions from our EVE Online memory reading framework. A generic scripting environment would allow you to write and reuse your own processing functions easily, so you could get as specific as you want when extracting the data you are interested in, and also render the raw data into easily readable text. Such a tool could be used in the Windows Command Prompt or Windows PowerShell.
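
To make the idea more concrete, here is a rough sketch of the kind of console loop such a tool could run. `ReadUiTreeFromGameClient` and the `Windows`/`Caption` properties are placeholders, not the framework’s actual API; the real names differ.

```csharp
// Minimal sketch of a console loop for real-time exploration.
// ReadUiTreeFromGameClient() is a placeholder for the framework's
// actual memory reading function; the real API looks different.
while (true)
{
    var uiTree = ReadUiTreeFromGameClient();

    // Render the part we care about as plain text so a screen reader
    // can pick it up from the console, e.g. the captions of visible windows.
    foreach (var window in uiTree.Windows)
        System.Console.WriteLine(window.Caption);

    System.Threading.Thread.Sleep(1000);
}
```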

I probably can build such a tool next week. I will post in this thread when it is ready.

Thank you! :slight_smile: I’m looking forward to seeing what you might be able to come up with, as dealing with the client reliably on my own is a little bit difficult. I did get someone to help me with the particular problem I was having earlier, but that of course doesn’t solve the other issues :slight_smile:

I found an easy way to do this. I compiled a guide on the setup and uploaded it to https://github.com/Arcitectus/Sanderling/blob/master/guide/explore-eve-online-interactive.md

Thanks! :slight_smile: I can’t wait to give this a try.

Of course, figuring out what to do with it is another question; I need to teach myself some C#. I imagine it would be fairly trivial to send a lot of things to an external speech synthesizer application. I will also have to examine the way that some of the bots are coded, so that I can continually read memory instead of working from a single snapshot of it. I appreciate the help on this. May I keep this thread open for accessibility questions, or would you prefer I start others?

Hi,

I tried out the tool and it definitely seems as though it’s going to help me a great deal. Is it theoretically possible to execute mouse clicks and similar from the console with this?

I’ve been thinking that what I want is less a traditional bot and more a sort of alternate UI, a way to display the game data and send commands to specific bits of it without having to depend on OCR for everything.

The other thing I was wondering is how this interacts with the recent switch to the Elm language. I feel like I’m missing a couple of steps in how I might apply this practically to interact in real time.

Any ideas are greatly appreciated.

Sending this data to speech synthesis will be easy. One way would be to integrate speech synthesis into the script setup. (https://stackoverflow.com/questions/56376581/how-can-i-use-speechsynthesizer-in-asp-net-core-2-0)
But I don’t know whether integrated speech synthesis would be better than rendering the text in a way that is easy for other programs to pick up, like an HTML document.
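
For example, a minimal sketch using the `System.Speech` API from the linked Stack Overflow question (Windows only); the `overviewSummary` string is just a placeholder for whatever text the script has extracted from the memory reading:

```csharp
using System.Speech.Synthesis;

// Minimal sketch: speak a piece of extracted text through the default audio device.
// Requires a reference to the System.Speech assembly (Windows only).
using (var synthesizer = new SpeechSynthesizer())
{
    synthesizer.SetOutputToDefaultAudioDevice();

    // Placeholder for text extracted from the memory reading.
    var overviewSummary = "3 wrecks and 1 asteroid belt in the overview.";
    synthesizer.Speak(overviewSummary);
}
```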

Yes, we can expand the script setup to help with sending mouse clicks. I can look up how it works in the bot implementations and adapt it for the script setup.
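
As a rough idea of how the mouse click part could look (this is not the code the bots use, just a sketch based on the standard Win32 functions):

```csharp
using System;
using System.Runtime.InteropServices;

static class MouseInput
{
    [DllImport("user32.dll")]
    static extern bool SetCursorPos(int x, int y);

    [DllImport("user32.dll")]
    static extern void mouse_event(uint flags, uint dx, uint dy, uint data, UIntPtr extraInfo);

    const uint MOUSEEVENTF_LEFTDOWN = 0x0002;
    const uint MOUSEEVENTF_LEFTUP = 0x0004;

    // Move the cursor to the given screen coordinates and send a left click there.
    // The EVE Online window must be in the foreground for the click to reach it.
    public static void LeftClickAt(int screenX, int screenY)
    {
        SetCursorPos(screenX, screenY);
        mouse_event(MOUSEEVENTF_LEFTDOWN, 0, 0, 0, UIntPtr.Zero);
        mouse_event(MOUSEEVENTF_LEFTUP, 0, 0, 0, UIntPtr.Zero);
    }
}
```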


Great to learn this. Building an alternate UI is no problem. For example, we could easily have it render objects from the game client into an HTML document that is updated automatically once or twice per second. This HTML document could have hyperlinks or buttons to execute typical commands like mouse clicks on the represented objects.
For the sensing part, we could also add audible alerts to such an interface.
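
As a sketch of the refresh loop idea (again with placeholder function names, not the actual implementation): read the memory, render the interesting objects into an HTML document, write it to a file, and repeat once per second.

```csharp
// Sketch of the periodic refresh idea. ReadUiTreeFromGameClient and
// RenderOverviewToHtml are placeholders, not functions from the framework.
while (true)
{
    var uiTree = ReadUiTreeFromGameClient();

    // Render, for example, the overview rows into an HTML table with a
    // button per row that triggers a mouse click on that row in the game client.
    string html = RenderOverviewToHtml(uiTree);

    System.IO.File.WriteAllText("eve-online-alternate-ui.html", html);
    System.Threading.Thread.Sleep(1000);
}
```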

This is very encouraging. I will begin looking into some of the implementations as well, to see if I can figure out how this all fits together.

It’s funny that you mention HTML, as my first thought was that it would be some sort of HTML-based overview, since that’s already a table anyway. Needless to say, I am intrigued by the possibilities.

I explored how we can build such an alternate UI for EVE Online. I posted the results so far at Sanderling/implement/alternate-ui at 9566709bc11fe139e813e39c3ef387e1ce30bf7c · Arcitectus/Sanderling · GitHub

The readme file in that directory walks you through the setup.

A quote of the beginning of the readme file is below:

Alternate UI for EVE Online

This is an alternate user interface for the EVE Online game client. It is based on HTML and javascript and is used in a web browser.

It gets information from the EVE Online client via memory reading.

There are two ways to get a memory reading into this interface:

  • Load from a live EVE Online client process. (TODO, not implemented yet: This user interface offers you input elements to interact with input elements in the EVE Online client. Note: When you send an input to the EVE Online client this way, the tool will switch the input focus to the EVE Online window and bring it to the foreground. In case you run this user interface on the same desktop as the EVE Online client: to avoid interference between the web browser window and the game client window, place them side by side, so that they don’t overlap.)

  • Load from a file: You can load memory readings in JSON format you have saved earlier. Since this memory reading does not correspond to a live process, we use this option only to explore the general structure of information found in the game client’s memory.

Also, another quote from the section I think is most interesting to you:

Reading from live process

When reading from a live process, the system needs to perform some setup steps, including searching for the root of the UI tree in the EVE Online client process. During the setup stage, you will see various messages informing you about the current step.

The memory reading setup should complete within 20 seconds.

If no EVE Online client is started, it displays the following message:

Looks like there is no EVE Online client process started. I continue looking in case one is started…

As long as reading from a live process is selected, the program tries once per second to get a new memory reading from the game client.

When setup is complete, you see the following message:

Successfully read from the memory of the live process.

Below is a button labeled:

Click here to download this memory measurement to a JSON file.

The memory reading file you can download here is useful for collaboration: In the ‘Reading from file’ section, people can load this file into the UI to see the same memory reading that you had on your system.

Under the save button, you get tools for closer examination of the memory reading:

Below is an interactive tree view to explore this memory reading. You can expand and collapse individual nodes.

Hi,

I admit I’m a bit puzzled as to what’s going on here, if only because I’m not sure how I’m supposed to actually use this. A big problem is that the tree view is not very accessible with my screen reading technology, and while I’m able to expand and collapse sections, it’s not as efficient as something like this implementation, because it’s a bunch of “expand” and “collapse” buttons, and I don’t have any keyboard access to the tree itself.

Even when I did reach a final UI node, I wasn’t really sure what I was able to do with it. I’m wondering if the parts of the UI are exposed in another place, or if I’m missing functionality because I can’t access the tree view proper? I could see and read all the UI summary information but was at a loss for what to do from that point. I should note that if you use something like the HTML canvas element, that’s not great for accessibility, so some other structure is needed. I guess I was just confused as to how to actually read game text practically.

Thanks for all the work on this. Any info on how to proceed would be very much appreciated.

The current state only implements the part of displaying the game data. It shows the UI tree from the game client and presents the properties of the UI nodes in text form.

I had some doubt that the UI would be usable if it rendered all the nodes in the UI tree into the HTML document all the time. There can be more than a thousand nodes in the tree, even in simple scenarios, and each node, in turn, can have many properties. So we end up with tens of thousands of properties in the UI tree when more objects are on the screen.

I thought this quantity might make for a confusing impression, so I introduced a way to better focus on what is of interest to you in a given moment: You can expand and collapse individual nodes of the UI tree. For a collapsed node, it only shows a small summary, not all properties. When you get the first memory reading, all nodes are displayed collapsed, so only the summary of the root node is shown. You can expand that, and then the children of the root node. This way, you can descend into the part you are interested in.

The other part of the UI, sending commands to specific bits, is not implemented yet.

For the interaction with the EVE Online client, we could add buttons on the UI tree nodes to send mouse clicks to the EVE Online client.

Then there are also text boxes in the EVE Online UI. I don’t remember a way to identify text boxes right now, so I have no idea at the moment what the text entry part could look like.

I will have a look at the keyboard access. I had a look at the page you linked; there is quite a lot of text to process there, so I will take a closer look when I have the time.

Sending input to the EVE Online client is not implemented yet, so the way to progress in this case is to adapt the code of the user interface. I can do that, so you don’t need to do anything there.

Good point. At the moment, it looks like this: the different kinds of information from the game client are displayed as the same kind of text. Some examples of the different kinds of information in there:

  • Coordinates and offsets, which are composed of numbers.
  • Colors, composed of numeric values for the opacity, red, green, and blue components.
  • Text that is displayed in the game as text. This is distinguished from the other kinds by the names of the properties. The last time I checked, CCP uses different names there; usually, the name of the property contains the string ‘text’.

As long as we can distinguish the properties representing text that is also visible in the game client, we can do something special with this subset. For example, highlight it with some formatting, or hide all other properties.

We could also add a text search function as another way to find nodes containing a given text.
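
A minimal sketch of both ideas, assuming each UI node exposes its properties as name/value string pairs (the real structure in the memory reading differs), written for a recent .NET version:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class UiNodeText
{
    // Keep only the properties whose name contains 'text',
    // i.e. the ones most likely to hold text visible in the game client.
    public static IEnumerable<KeyValuePair<string, string>> TextPropertiesOnly(
        IDictionary<string, string> nodeProperties) =>
        nodeProperties.Where(property =>
            property.Key.Contains("text", StringComparison.OrdinalIgnoreCase));

    // Simple text search: does any property value contain the search term?
    public static bool NodeContainsText(
        IDictionary<string, string> nodeProperties, string searchTerm) =>
        nodeProperties.Values.Any(value =>
            value != null && value.Contains(searchTerm, StringComparison.OrdinalIgnoreCase));
}
```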

Thanks for clarifying that. :slight_smile:
I definitely understand the intent behind the tree now, and am glad I wasn’t missing anything.

If I can help at all with web accessibility, that’s more or less my day job, so I’m more than happy to test or provide advice, though I’m unfamiliar with this particular framework.

Your point about rendering thousands of nodes is well-taken. Some kind of text search would be helpful, or at least a way to filter out some elements which are less than helpful for me, like colors.

I presume that the tree is limited to the elements which are currently on screen, for the most part? I saw a lot of unpopulated nodes for things like the intro screen, for instance.

I was wondering if it might be possible to do something like construct an HTML table from the current overview tab, which is presumably a little less overwhelming than the entire UI tree. Of course, I need a way to interact with the rest of the client eventually, but I’m still new myself so it’s a bit of a struggle to figure out what I could benefit from.

Thanks for all your help.

I am not familiar with how to optimize a UI for accessibility, so I will need feedback there. I know the part about mapping information into an HTML tree, and I can see how the document renders to a two-dimensional representation, but I don’t know which variants work better with screen readers.

I will add a text search.
Also, the whole design is optimized for easy customization, so you can customize the subset of elements to display.

Yes, the memory reading is limited to what is on the screen. For example, the overview window: Similar to windows in the Windows OS UI, it has a minimize button. When you minimize it in the game client, it also disappears in the memory reading. Maybe a single node remains, but none of the children which contain all the information of interest when playing.

This visibility limit is also important to consider for another reason: the overview window’s viewport has limited space to display rows representing the objects in space. When more objects match the current preset than fit into the viewport, some are invisible. The in-game UI offers scrollbars for mapping the larger virtual space into the viewport. The alternate UI probably works better when you make the overview window as tall as possible. In case you need to filter the overview rows to a subset, it is not necessary to use an overview preset in the game client. Bots already implement filters for overview rows, and you could reuse them for the alternate UI. Some bots also read the part of the overview’s subtree representing the scrollbars. Based on this reading, they check whether there are scrollbars and whether the viewport is scrolled to where we want it.

Sure, I can add a section dedicated to the overview window. It could also represent the overview entries without offering interactions to expand or collapse parts of the subtree.

This sounds like a great place to begin. I appreciate the info about the viewport; it makes sense, but it is a little hard for me to verify for myself.

I imagine another benefit to the alternate UI is that the window layout is less critical. For the moment my problem is that OCR doesn’t understand that the client is divided into multiple windows, and it tends to lump things together in weird ways.

I look forward to seeing whatever you can come up with, and will help as much as I can.

The general accessibility wisdom is to use standard HTML elements wherever possible, and to avoid coding custom widgets, because they are more complicated and error-prone. We’re not doing a formal audit or anything like that here, of course, but I think it still applies. :slight_smile:

I expanded the alternate UI to display the overview window as a standard HTML table.

To use this new version, you only have to replace the contents of the elm-app directory with the latest version from the repository at https://github.com/Arcitectus/Sanderling/tree/master/implement/alternate-ui

Starting the alternate UI works as before. The new overview table is displayed under the tree view.

Thank you! :slight_smile: I’m looking forward to trying this as soon as I get home later today.

I just looked at this, and it’s exactly the sort of thing I wanted. Thank you very much!

I did have a question. I was wondering how well this would work with the EVE client running full screen? At the moment I have it that way because OCR seems to react a little better to it, but if I can eventually switch to mostly using an alternate UI, that might not be necessary. In your guide you mention that the alternate UI needs to switch over to the client to execute mouse clicks and such. I was wondering how well this would interact with full-screen mode.