Background:
I am building an LLM-powered bot on Sanderling + Python + LangChain (~40% complete).
Current architecture:
- Read — Sanderling for game state capture
- Process — LangChain orchestrator for decision-making
- Write — Input injection (not yet implemented)
Detection Methods I’m Already Aware Of:
- Player reports — GM review triggered by human reports
- Behavioural analysis — Flagging inhuman patterns (e.g. mining 24hrs non-stop, perfect reaction times, never responding in Local)
To counter #2, I’m using a hybrid approach: a state machine handles routine tasks (mining, FW plexing), while the LLM handles edge cases: responding to Local chat, reacting to threats, scheduling breaks. The goal is behaviour that passes casual human inspection (effectively a Turing test).
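For anyone asking what I mean by "hybrid", here is a minimal sketch of the routing idea. All names here (`HybridController`, `Event`, the state/event labels) are hypothetical placeholders, not my actual code: routine transitions are table-driven and deterministic, and anything not in the table falls through to a pluggable handler (where the LangChain side would sit).

```python
from enum import Enum, auto
from dataclasses import dataclass, field

class State(Enum):
    IDLE = auto()
    MINING = auto()
    FLEEING = auto()

@dataclass
class Event:
    kind: str                      # e.g. "belt_reached", "hostile_in_local"
    payload: dict = field(default_factory=dict)

class HybridController:
    """Table-driven FSM for routine tasks; unrecognised events fall
    through to an injected handler (e.g. an LLM chain). Illustrative only."""

    # (current state, event kind) -> next state
    TRANSITIONS = {
        (State.IDLE, "belt_reached"): State.MINING,
        (State.MINING, "hostile_in_local"): State.FLEEING,
        (State.FLEEING, "safe"): State.IDLE,
    }

    def __init__(self, fallback_handler=None):
        self.state = State.IDLE
        # fallback_handler: callable(state, event) -> State; injected, not assumed
        self.fallback_handler = fallback_handler

    def dispatch(self, event: Event) -> State:
        key = (self.state, event.kind)
        if key in self.TRANSITIONS:
            self.state = self.TRANSITIONS[key]          # routine: deterministic
        elif self.fallback_handler is not None:
            self.state = self.fallback_handler(self.state, event)  # edge case
        return self.state
```

The point of the split is that the deterministic table stays cheap and testable, and the expensive/unpredictable component only sees events the table doesn't cover.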
My Question:
As I implement the input layer:
- Does EVE detect high-level input emulation (SendMessage, UI Automation, etc.)?
- Or is detection purely behavioural?
- Is low-level input simulation (kernel-level mouse/keyboard) necessary?
- Worst case, I have some STM32s lying around that could easily be turned into a physical HID.
Any experience with client-side detection would be appreciated.
A Note on Source Code:
I won’t be sharing any source code for obvious reasons, but I’m happy to discuss architecture and implementation details with experienced developers. If you’re not familiar with finite state machines and event-driven design, this probably isn’t the thread for you.