Agent

Prompt-based Agents

Supported Models

We currently support the following models as foundation models for the agents:

  • GPT-3.5 (gpt-3.5-turbo-16k, ...)
  • GPT-4 (gpt-4-0125-preview, gpt-4-1106-preview, ...)
  • GPT-4V (gpt-4-vision-preview, ...)
  • Gemini-Pro
  • Gemini-Pro-Vision
  • Claude-3, Claude-2 (claude-3-haiku-20240307, claude-3-sonnet-20240229, ...)
  • ...

And those from the open-source community:

  • Mixtral 8x7B
  • Qwen, Qwen-VL
  • CogAgent
  • Llama3
  • ...
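
Switching between these backends generally amounts to changing the model string passed to the agent. A minimal sketch (the exact identifier strings accepted depend on the backend client; see the full usage example below):

from mm_agents.agent import PromptAgent

# The model string selects the backend; identifiers follow the lists above
agent = PromptAgent(
    model="claude-3-sonnet-20240229",  # e.g. swap in "gemini-pro-vision" or "gpt-4-0125-preview"
    observation_type="screenshot",
)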

In the future, we will integrate and support more foundation models to power digital agents, so stay tuned.

How to use

from mm_agents.agent import PromptAgent

agent = PromptAgent(
    model="gpt-4-vision-preview",
    observation_type="screenshot",
)
agent.reset()

# Say we have an instruction and an observation
instruction = "Please help me find the nearest restaurant."
with open("path/to/observation.jpg", "rb") as f:
    obs = {"screenshot": f.read()}
response, actions = agent.predict(
    instruction,
    obs
)
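
The predict call returns the raw model response together with a list of parsed actions. In a full run, these actions are executed in OSWorld's DesktopEnv and the resulting observation is fed back to the agent. A minimal sketch of that loop, assuming the DesktopEnv import path and constructor arguments from the top-level OSWorld quick start:

from desktop_env.desktop_env import DesktopEnv
from mm_agents.agent import PromptAgent

# Assumed setup following the top-level OSWorld quick start; in practice
# env.reset() may also take a task_config describing the task to run
env = DesktopEnv(action_space="pyautogui")  # must match the agent's action space
agent = PromptAgent(
    model="gpt-4-vision-preview",
    observation_type="screenshot",
)

obs = env.reset()
agent.reset()
instruction = "Please help me find the nearest restaurant."
done = False
for _ in range(15):  # hypothetical step budget
    response, actions = agent.predict(instruction, obs)
    for action in actions:
        obs, reward, done, info = env.step(action)
        if done:
            break
    if done:
        break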

Observation Space and Action Space

We currently support the following observation spaces:

  • a11y_tree: the accessibility tree of the current screen
  • screenshot: a screenshot of the current screen
  • screenshot_a11y_tree: a screenshot of the current screen with the accessibility tree overlay
  • som: a set-of-mark annotated screenshot of the current screen, with table metadata included

And the following action spaces (a combined sketch follows this list):

  • pyautogui: valid Python code that calls functions from the pyautogui library
  • computer_13: a set of enumerated actions designed by us
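
The observation space and the action space are fixed when the agent is constructed. A minimal sketch of how they might be paired, assuming PromptAgent accepts an action_space argument alongside observation_type (the argument name is an assumption):

from mm_agents.agent import PromptAgent

# Hypothetical pairings; the action_space argument name is an assumption
text_agent = PromptAgent(
    model="gpt-4-0125-preview",
    observation_type="a11y_tree",  # text-only model: feed it the a11y tree
    action_space="pyautogui",      # emits executable pyautogui code
)
vision_agent = PromptAgent(
    model="gpt-4-vision-preview",
    observation_type="som",        # set-of-mark annotated screenshot
    action_space="computer_13",    # emits enumerated actions
)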

To feed an observation into the agent, maintain obs as a dict carrying the fields required by the chosen observation type:

# Continuing from the previous snippet
with open("path/to/observation.jpg", "rb") as f:
    obs = {
        "screenshot": f.read(),
        "a11y_tree": "",  # fill in the serialized a11y tree data
    }
response, actions = agent.predict(
    instruction,
    obs
)
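
In general the agent reads only the keys required by its observation_type, so populating both fields as above should keep the same obs dict usable across configurations.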

Efficient Agents, Q* Agents, and more

Stay tuned for more updates.