burtenshaw (HF Staff) committed
Commit 73edc95 · verified · Parent(s): bc97e1c

Upload folder using huggingface_hub

Dockerfile ADDED
@@ -0,0 +1,81 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ # Multi-stage build using openenv-base
+ # This Dockerfile is flexible and works for both:
+ # - In-repo environments (with local OpenEnv sources)
+ # - Standalone environments (with openenv from PyPI/Git)
+ # The build script (openenv build) handles context detection and sets appropriate build args.
+
+ ARG BASE_IMAGE=ghcr.io/meta-pytorch/openenv-base:latest
+ FROM ${BASE_IMAGE} AS builder
+
+ WORKDIR /app
+
+ # Ensure git is available (required for installing dependencies from VCS)
+ RUN apt-get update && \
+     apt-get install -y --no-install-recommends git && \
+     rm -rf /var/lib/apt/lists/*
+
+ # Build argument to control whether we're building standalone or in-repo
+ ARG BUILD_MODE=in-repo
+ ARG ENV_NAME=benchmark
+
+ # Copy environment code (always at root of build context)
+ COPY . /app/env
+
+ # For in-repo builds, openenv is already vendored in the build context
+ # For standalone builds, openenv will be installed via pyproject.toml
+ WORKDIR /app/env
+
+ # Ensure uv is available (for local builds where base image lacks it)
+ RUN if ! command -v uv >/dev/null 2>&1; then \
+         curl -LsSf https://astral.sh/uv/install.sh | sh && \
+         mv /root/.local/bin/uv /usr/local/bin/uv && \
+         mv /root/.local/bin/uvx /usr/local/bin/uvx; \
+     fi
+
+ # Install dependencies using uv sync
+ # If uv.lock exists, use it; otherwise resolve on the fly
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-install-project --no-editable; \
+     else \
+         uv sync --no-install-project --no-editable; \
+     fi
+
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     if [ -f uv.lock ]; then \
+         uv sync --frozen --no-editable; \
+     else \
+         uv sync --no-editable; \
+     fi
+
+ # Final runtime stage
+ FROM ${BASE_IMAGE}
+
+ WORKDIR /app
+
+ # Copy the virtual environment from builder
+ COPY --from=builder /app/env/.venv /app/.venv
+
+ # Copy the environment code
+ COPY --from=builder /app/env /app/env
+
+ # Set PATH to use the virtual environment
+ ENV PATH="/app/.venv/bin:$PATH"
+
+ # Set PYTHONPATH so imports work correctly
+ ENV PYTHONPATH="/app/env:$PYTHONPATH"
+
+ # Health check
+ HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
+     CMD curl -f http://localhost:8000/health || exit 1
+
+ # Run the FastAPI server
+ # The module path is constructed to work with the /app/env structure
+ ENV ENABLE_WEB_INTERFACE=true
+ CMD ["sh", "-c", "cd /app/env && uvicorn server.app:app --host 0.0.0.0 --port 8000"]
README.md CHANGED
@@ -1,10 +1,255 @@
  ---
- title: Openenv Benchmark Ws
- emoji: 🐢
- colorFrom: purple
- colorTo: indigo
+ title: Benchmark Environment Server
+ emoji: 🎸
+ colorFrom: red
+ colorTo: red
  sdk: docker
  pinned: false
+ app_port: 8000
+ base_path: /web
+ tags:
+   - openenv
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # Benchmark Environment
+
+ A simple test environment that echoes back messages, useful for testing the environment APIs and for demonstrating common usage patterns.
+
+ ## Quick Start
+
+ The simplest way to use the Benchmark environment is through the `BenchmarkEnv` class:
+
+ ```python
+ from benchmark import BenchmarkAction, BenchmarkEnv
+
+ # Create environment from Docker image
+ benchmarkenv = BenchmarkEnv.from_docker_image("benchmark-env:latest")
+
+ try:
+     # Reset
+     result = benchmarkenv.reset()
+     print(f"Reset: {result.observation.echoed_message}")
+
+     # Send multiple messages
+     messages = ["Hello, World!", "Testing echo", "Final message"]
+
+     for msg in messages:
+         result = benchmarkenv.step(BenchmarkAction(message=msg))
+         print(f"Sent: '{msg}'")
+         print(f"  → Echoed: '{result.observation.echoed_message}'")
+         print(f"  → Length: {result.observation.message_length}")
+         print(f"  → Reward: {result.reward}")
+
+ finally:
+     # Always clean up
+     benchmarkenv.close()
+ ```
+
+ That's it! The `BenchmarkEnv.from_docker_image()` method handles:
+ - Starting the Docker container
+ - Waiting for the server to be ready
+ - Connecting to the environment
+ - Container cleanup when you call `close()`
+
+ ## Building the Docker Image
+
+ Before using the environment, you need to build the Docker image:
+
+ ```bash
+ # From project root
+ docker build -t benchmark-env:latest -f server/Dockerfile .
+ ```
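+
+ If you prefer to run the image by hand rather than through `from_docker_image()`, a minimal sketch (the port mapping assumes the server's default port 8000):
+
+ ```bash
+ # Run the server container, exposing the FastAPI port
+ docker run --rm -p 8000:8000 benchmark-env:latest
+
+ # In another shell, verify the server is up
+ curl http://localhost:8000/health
+ ```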
+
+ ## Deploying to Hugging Face Spaces
+
+ You can easily deploy your OpenEnv environment to Hugging Face Spaces using the `openenv push` command:
+
+ ```bash
+ # From the environment directory (where openenv.yaml is located)
+ openenv push
+
+ # Or specify options
+ openenv push --namespace my-org --private
+ ```
+
+ The `openenv push` command will:
+ 1. Validate that the directory is an OpenEnv environment (checks for `openenv.yaml`)
+ 2. Prepare a custom build for a Hugging Face Docker Space (enables the web interface)
+ 3. Upload to Hugging Face (ensuring you're logged in)
+
+ ### Prerequisites
+
+ - Authenticate with Hugging Face: the command will prompt for login if you are not already authenticated
+
+ ### Options
+
+ - `--directory`, `-d`: Directory containing the OpenEnv environment (defaults to the current directory)
+ - `--repo-id`, `-r`: Repository ID in the format `username/repo-name` (defaults to `username/env-name` from `openenv.yaml`)
+ - `--base-image`, `-b`: Base Docker image to use (overrides the Dockerfile `FROM`)
+ - `--private`: Deploy the space as private (default: public)
+
+ ### Examples
+
+ ```bash
+ # Push to your personal namespace (defaults to username/env-name from openenv.yaml)
+ openenv push
+
+ # Push to a specific repository
+ openenv push --repo-id my-org/my-env
+
+ # Push with a custom base image
+ openenv push --base-image ghcr.io/meta-pytorch/openenv-base:latest
+
+ # Push as a private space
+ openenv push --private
+
+ # Combine options
+ openenv push --repo-id my-org/my-env --base-image custom-base:latest --private
+ ```
+
+ After deployment, your space will be available at:
+ `https://huggingface.co/spaces/<repo-id>`
+
+ The deployed space includes:
+ - **Web Interface** at `/web` - Interactive UI for exploring the environment
+ - **API Documentation** at `/docs` - Full OpenAPI/Swagger interface
+ - **Health Check** at `/health` - Container health monitoring
+ - **WebSocket** at `/ws` - Persistent session endpoint for low-latency interactions
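+
+ Once the space is live you can probe these endpoints directly. A sketch (the `*.hf.space` hostname shown is an assumption; use the exact host from your space's embed URL):
+
+ ```bash
+ # Container health
+ curl https://<your-space-host>.hf.space/health
+
+ # Action/observation schemas served by the environment
+ curl https://<your-space-host>.hf.space/schema
+ ```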
+
+ ## Environment Details
+
+ ### Action
+ **BenchmarkAction**: Contains a single field
+ - `message` (str) - The message to echo back
+
+ ### Observation
+ **BenchmarkObservation**: Contains the echo response and metadata
+ - `echoed_message` (str) - The message echoed back
+ - `message_length` (int) - Length of the message
+ - `reward` (float) - Reward based on message length (length × 0.1)
+ - `done` (bool) - Always False for the echo environment
+ - `metadata` (dict) - Additional info like step count
+
+ ### Reward
+ The reward is calculated as: `message_length × 0.1`
+ - "Hi" → reward: 0.2
+ - "Hello, World!" → reward: 1.3
+ - Empty message → reward: 0.0
+
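+ A quick sanity check of this formula in plain Python (a sketch mirroring the reward rule stated above):
+
+ ```python
+ def reward(message: str) -> float:
+     # Reward grows linearly with message length
+     return len(message) * 0.1
+
+ assert round(reward("Hi"), 1) == 0.2
+ assert round(reward("Hello, World!"), 1) == 1.3  # 13 characters
+ assert reward("") == 0.0
+ ```
+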
+ ## Advanced Usage
+
+ ### Connecting to an Existing Server
+
+ If you already have a Benchmark environment server running, you can connect directly:
+
+ ```python
+ from benchmark import BenchmarkAction, BenchmarkEnv
+
+ # Connect to existing server
+ benchmarkenv = BenchmarkEnv(base_url="<ENV_HTTP_URL_HERE>")
+
+ # Use as normal
+ result = benchmarkenv.reset()
+ result = benchmarkenv.step(BenchmarkAction(message="Hello!"))
+ ```
+
+ Note: when connecting to an existing server, `benchmarkenv.close()` will NOT stop the server.
+
+ ### WebSocket Client for Persistent Sessions
+
+ For long-running episodes, or when you need lower latency, use the WebSocket client:
+
+ ```python
+ from benchmark import BenchmarkAction, BenchmarkEnvWS
+
+ # Connect via WebSocket (maintains a persistent connection)
+ with BenchmarkEnvWS(base_url="http://localhost:8000") as env:
+     result = env.reset()
+     print(f"Reset: {result.observation.echoed_message}")
+
+     # Multiple steps with low latency
+     for msg in ["Hello", "World", "!"]:
+         result = env.step(BenchmarkAction(message=msg))
+         print(f"Echoed: {result.observation.echoed_message}")
+ ```
+
+ WebSocket advantages:
+ - **Lower latency**: No HTTP connection overhead per request
+ - **Persistent session**: Server maintains your environment state
+ - **Efficient for episodes**: Better for many sequential steps
+
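+ Under the hood, the `/ws` endpoint speaks a small JSON message protocol (the same shapes exercised by `test_concurrency.py`). A raw-client sketch, assuming the echo action schema described above and the third-party `websockets` package:
+
+ ```python
+ import asyncio
+ import json
+
+ import websockets  # pip install websockets
+
+
+ async def raw_session() -> None:
+     async with websockets.connect("ws://localhost:8000/ws") as ws:
+         # Initialize a fresh environment session
+         await ws.send(json.dumps({"type": "reset", "data": {}}))
+         print(json.loads(await ws.recv()))
+
+         # Step with an action payload
+         await ws.send(json.dumps({"type": "step", "data": {"message": "Hello"}}))
+         print(json.loads(await ws.recv()))
+
+         # End the session cleanly
+         await ws.send(json.dumps({"type": "close"}))
+
+
+ asyncio.run(raw_session())
+ ```
+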
+ ### Concurrent WebSocket Sessions
+
+ The server supports multiple concurrent WebSocket connections. To enable this,
+ modify `server/app.py` to use factory mode:
+
+ ```python
+ # In server/app.py - use factory mode for concurrent sessions
+ app = create_app(
+     BenchmarkEnvironment,  # Pass the class, not an instance
+     BenchmarkAction,
+     BenchmarkObservation,
+     max_concurrent_envs=4,  # Allow 4 concurrent sessions
+ )
+ ```
+
+ Then multiple clients can connect simultaneously:
+
+ ```python
+ from concurrent.futures import ThreadPoolExecutor
+
+ from benchmark import BenchmarkAction, BenchmarkEnvWS
+
+
+ def run_episode(client_id: int):
+     with BenchmarkEnvWS(base_url="http://localhost:8000") as env:
+         result = env.reset()
+         for i in range(10):
+             result = env.step(BenchmarkAction(message=f"Client {client_id}, step {i}"))
+         return client_id, result.observation.message_length
+
+
+ # Run 4 episodes concurrently
+ with ThreadPoolExecutor(max_workers=4) as executor:
+     results = list(executor.map(run_episode, range(4)))
+ ```
+
+ ## Development & Testing
+
+ ### Direct Environment Testing
+
+ Test the environment logic directly, without starting the HTTP server (an in-process sketch follows the checklist below):
+
+ ```bash
+ # From the environment root
+ python3 server/benchmark_environment.py
+ ```
+
+ This verifies that:
+ - The environment resets correctly
+ - Step executes actions properly
+ - State tracking works
+ - Rewards are calculated correctly
+
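+ An equivalent in-process check, as a minimal sketch (it assumes the echo-style `BenchmarkEnvironment` described above and that the environment root is on `sys.path`):
+
+ ```python
+ from models import BenchmarkAction
+ from server.benchmark_environment import BenchmarkEnvironment
+
+ env = BenchmarkEnvironment()
+ obs = env.reset()
+ assert obs.done is False
+
+ obs = env.step(BenchmarkAction(message="Hello"))
+ assert obs.echoed_message == "Hello"
+ assert obs.message_length == 5
+ assert abs(obs.reward - 0.5) < 1e-9  # len("Hello") * 0.1
+ ```
+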
+ ### Running Locally
+
+ Run the server locally for development:
+
+ ```bash
+ uvicorn server.app:app --reload
+ ```
+
+ ## Project Structure
+
+ ```
+ benchmark/
+ ├── .dockerignore                 # Docker build exclusions
+ ├── __init__.py                   # Module exports
+ ├── README.md                     # This file
+ ├── openenv.yaml                  # OpenEnv manifest
+ ├── pyproject.toml                # Project metadata and dependencies
+ ├── uv.lock                       # Locked dependencies (generated)
+ ├── client.py                     # BenchmarkEnv (HTTP) and BenchmarkEnvWS (WebSocket) clients
+ ├── models.py                     # Action and Observation models
+ └── server/
+     ├── __init__.py               # Server module exports
+     ├── benchmark_environment.py  # Core environment logic
+     ├── app.py                    # FastAPI application (HTTP + WebSocket endpoints)
+     └── Dockerfile                # Container image definition
+ ```
__init__.py ADDED
@@ -0,0 +1,17 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Benchmark Environment - Test environment for concurrency and infrastructure."""
+
+ from .client import BenchmarkEnv, BenchmarkEnvWS
+ from .models import BenchmarkAction, BenchmarkObservation
+
+ __all__ = [
+     "BenchmarkAction",
+     "BenchmarkObservation",
+     "BenchmarkEnv",
+     "BenchmarkEnvWS",
+ ]
build/lib/benchmark/__init__.py ADDED
@@ -0,0 +1,17 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Benchmark Environment - A simple test environment for HTTP server."""
+
+ from .client import BenchmarkEnv, BenchmarkEnvWS
+ from .models import BenchmarkAction, BenchmarkObservation
+
+ __all__ = [
+     "BenchmarkAction",
+     "BenchmarkObservation",
+     "BenchmarkEnv",
+     "BenchmarkEnvWS",
+ ]
build/lib/benchmark/client.py ADDED
@@ -0,0 +1,189 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Benchmark Environment Clients.
+
+ This module provides clients for connecting to a Benchmark Environment server:
+ - BenchmarkEnv: HTTP client for request/response interactions
+ - BenchmarkEnvWS: WebSocket client for persistent sessions
+ """
+
+ from typing import Any, Dict
+
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+ from openenv.core.http_env_client import HTTPEnvClient
+ from openenv.core.ws_env_client import WebSocketEnvClient
+
+ from .models import BenchmarkAction, BenchmarkObservation
+
+
+ class BenchmarkEnv(HTTPEnvClient[BenchmarkAction, BenchmarkObservation]):
+     """
+     HTTP client for the Benchmark Environment.
+
+     This client connects to a BenchmarkEnvironment HTTP server and provides
+     methods to interact with it: reset(), step(), and state access.
+
+     Example:
+         >>> # Connect to a running server
+         >>> client = BenchmarkEnv(base_url="http://localhost:8000")
+         >>> result = client.reset()
+         >>> print(result.observation.echoed_message)
+         >>>
+         >>> # Send a message
+         >>> result = client.step(BenchmarkAction(message="Hello!"))
+         >>> print(result.observation.echoed_message)
+         >>> print(result.reward)
+
+     Example with Docker:
+         >>> # Automatically start container and connect
+         >>> client = BenchmarkEnv.from_docker_image("benchmark-env:latest")
+         >>> result = client.reset()
+         >>> result = client.step(BenchmarkAction(message="Test"))
+     """
+
+     def _step_payload(self, action: BenchmarkAction) -> Dict:
+         """
+         Convert BenchmarkAction to JSON payload for step request.
+
+         Args:
+             action: BenchmarkAction instance
+
+         Returns:
+             Dictionary representation suitable for JSON encoding
+         """
+         return {
+             "message": action.message,
+         }
+
+     def _parse_result(self, payload: Dict) -> StepResult[BenchmarkObservation]:
+         """
+         Parse server response into StepResult[BenchmarkObservation].
+
+         Args:
+             payload: JSON response from server
+
+         Returns:
+             StepResult with BenchmarkObservation
+         """
+         obs_data = payload.get("observation", {})
+         observation = BenchmarkObservation(
+             echoed_message=obs_data.get("echoed_message", ""),
+             message_length=obs_data.get("message_length", 0),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         """
+         Parse server response into State object.
+
+         Args:
+             payload: JSON response from /state endpoint
+
+         Returns:
+             State object with episode_id and step_count
+         """
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
+
+
+ class BenchmarkEnvWS(WebSocketEnvClient[BenchmarkAction, BenchmarkObservation]):
+     """
+     WebSocket client for the Benchmark Environment.
+
+     This client maintains a persistent WebSocket connection to the environment server,
+     enabling efficient multi-step interactions with lower latency than HTTP.
+     Each client instance has its own dedicated environment session on the server.
+
+     Advantages over HTTP client:
+     - Lower latency for sequential interactions (no connection overhead per request)
+     - Session state is maintained server-side
+     - Better suited for long-running episodes
+
+     Example:
+         >>> # Connect to a running server via WebSocket
+         >>> with BenchmarkEnvWS(base_url="http://localhost:8000") as client:
+         ...     result = client.reset()
+         ...     print(result.observation.echoed_message)
+         ...
+         ...     result = client.step(BenchmarkAction(message="Hello!"))
+         ...     print(result.observation.echoed_message)
+
+     Example with Docker:
+         >>> # Automatically start container and connect via WebSocket
+         >>> client = BenchmarkEnvWS.from_docker_image("benchmark-env:latest")
+         >>> try:
+         ...     result = client.reset()
+         ...     result = client.step(BenchmarkAction(message="Test"))
+         ... finally:
+         ...     client.close()
+     """
+
+     def _step_payload(self, action: BenchmarkAction) -> Dict:
+         """
+         Convert BenchmarkAction to JSON payload for step message.
+
+         Args:
+             action: BenchmarkAction instance
+
+         Returns:
+             Dictionary representation suitable for JSON encoding
+         """
+         return {
+             "message": action.message,
+         }
+
+     def _parse_result(self, payload: Dict) -> StepResult[BenchmarkObservation]:
+         """
+         Parse WebSocket response into StepResult[BenchmarkObservation].
+
+         Args:
+             payload: JSON response data from server
+
+         Returns:
+             StepResult with BenchmarkObservation
+         """
+         obs_data = payload.get("observation", {})
+         observation = BenchmarkObservation(
+             echoed_message=obs_data.get("echoed_message", ""),
+             message_length=obs_data.get("message_length", 0),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         """
+         Parse WebSocket state response into State object.
+
+         Args:
+             payload: JSON response from state request
+
+         Returns:
+             State object with episode_id and step_count
+         """
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
build/lib/benchmark/models.py ADDED
@@ -0,0 +1,31 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Data models for the Benchmark Environment.
+
+ The benchmark environment is a simple test environment that echoes back messages.
+ """
+
+ from dataclasses import dataclass
+
+ from openenv.core.env_server.types import Action, Observation
+
+
+ @dataclass(kw_only=True)
+ class BenchmarkAction(Action):
+     """Action for the Benchmark environment - just a message to echo."""
+
+     message: str
+
+
+ @dataclass(kw_only=True)
+ class BenchmarkObservation(Observation):
+     """Observation from the Benchmark environment - the echoed message."""
+
+     echoed_message: str
+     message_length: int = 0
build/lib/benchmark/server/__init__.py ADDED
@@ -0,0 +1,12 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Benchmark environment server components."""
+
+ from .benchmark_environment import BenchmarkEnvironment
+
+ __all__ = ["BenchmarkEnvironment"]
build/lib/benchmark/server/app.py ADDED
@@ -0,0 +1,80 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ FastAPI application for the Benchmark Environment.
+
+ This module creates an HTTP server that exposes the BenchmarkEnvironment
+ over HTTP and WebSocket endpoints, compatible with HTTPEnvClient and WebSocketEnvClient.
+
+ Endpoints:
+     - POST /reset: Reset the environment
+     - POST /step: Execute an action
+     - GET /state: Get current environment state
+     - GET /schema: Get action/observation schemas
+     - WS /ws: WebSocket endpoint for persistent sessions
+
+ Usage:
+     # Development (with auto-reload):
+     uvicorn server.app:app --reload --host 0.0.0.0 --port 8000
+
+     # Production:
+     uvicorn server.app:app --host 0.0.0.0 --port 8000 --workers 4
+
+     # Or run directly:
+     python -m server.app
+ """
+
+ try:
+     from openenv.core.env_server.http_server import create_app
+ except Exception as e:  # pragma: no cover
+     raise ImportError(
+         "openenv is required for the web interface. Install dependencies with `uv sync`."
+     ) from e
+
+ from benchmark.models import BenchmarkAction, BenchmarkObservation
+
+ from .benchmark_environment import BenchmarkEnvironment
+
+
+ # Create the app with web interface and README integration
+ app = create_app(
+     BenchmarkEnvironment,
+     BenchmarkAction,
+     BenchmarkObservation,
+     env_name="benchmark",
+     max_concurrent_envs=1,  # Increase this number to allow more concurrent WebSocket sessions
+ )
+
+
+ def main(host: str = "0.0.0.0", port: int = 8000):
+     """
+     Entry point for direct execution via uv run or python -m.
+
+     This function enables running the server without Docker:
+         uv run --project . server
+         uv run --project . server --port 8001
+         python -m benchmark.server.app
+
+     Args:
+         host: Host address to bind to (default: "0.0.0.0")
+         port: Port number to listen on (default: 8000)
+
+     For production deployments, consider using uvicorn directly with
+     multiple workers:
+         uvicorn benchmark.server.app:app --workers 4
+     """
+     import uvicorn
+
+     uvicorn.run(app, host=host, port=port)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--port", type=int, default=8000)
+     args = parser.parse_args()
+     main(port=args.port)
build/lib/benchmark/server/benchmark_environment.py ADDED
@@ -0,0 +1,101 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Benchmark Environment Implementation.
+
+ A simple test environment that echoes back messages sent to it.
+ Perfect for testing HTTP server infrastructure.
+ """
+
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ from models import BenchmarkAction, BenchmarkObservation
+
+
+ class BenchmarkEnvironment(Environment):
+     """
+     A simple echo environment that echoes back messages.
+
+     This environment is designed for testing the HTTP server infrastructure.
+     It maintains minimal state and simply echoes back whatever message it receives.
+
+     Example:
+         >>> env = BenchmarkEnvironment()
+         >>> obs = env.reset()
+         >>> print(obs.echoed_message)  # "Benchmark environment ready!"
+         >>>
+         >>> obs = env.step(BenchmarkAction(message="Hello"))
+         >>> print(obs.echoed_message)  # "Hello"
+         >>> print(obs.message_length)  # 5
+     """
+
+     # Enable concurrent WebSocket sessions.
+     # Set to True if your environment isolates state between instances.
+     # When True, multiple WebSocket clients can connect simultaneously, each
+     # getting their own environment instance (when using factory mode in app.py).
+     CONCURRENCY_SAFE: bool = True
+
+     def __init__(self):
+         """Initialize the benchmark environment."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count = 0
+
+     def reset(self) -> BenchmarkObservation:
+         """
+         Reset the environment.
+
+         Returns:
+             BenchmarkObservation with a ready message
+         """
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._reset_count += 1
+
+         return BenchmarkObservation(
+             echoed_message="Benchmark environment ready!",
+             message_length=0,
+             done=False,
+             reward=0.0,
+         )
+
+     def step(self, action: BenchmarkAction) -> BenchmarkObservation:  # type: ignore[override]
+         """
+         Execute a step in the environment by echoing the message.
+
+         Args:
+             action: BenchmarkAction containing the message to echo
+
+         Returns:
+             BenchmarkObservation with the echoed message and its length
+         """
+         self._state.step_count += 1
+
+         message = action.message
+         length = len(message)
+
+         # Simple reward: longer messages get higher rewards
+         reward = length * 0.1
+
+         return BenchmarkObservation(
+             echoed_message=message,
+             message_length=length,
+             done=False,
+             reward=reward,
+             metadata={"original_message": message, "step": self._state.step_count},
+         )
+
+     @property
+     def state(self) -> State:
+         """
+         Get the current environment state.
+
+         Returns:
+             Current State with episode_id and step_count
+         """
+         return self._state
client.py ADDED
@@ -0,0 +1,82 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Benchmark Environment Clients.
+
+ Provides HTTP and WebSocket clients for the Benchmark Environment.
+ """
+
+ from typing import Dict
+
+ from openenv.core.client_types import StepResult
+ from openenv.core.env_server.types import State
+ from openenv.core.http_env_client import HTTPEnvClient
+ from openenv.core.ws_env_client import WebSocketEnvClient
+
+ from .models import BenchmarkAction, BenchmarkObservation
+
+
+ class BenchmarkEnv(HTTPEnvClient[BenchmarkAction, BenchmarkObservation]):
+     """HTTP client for the Benchmark Environment."""
+
+     def _step_payload(self, action: BenchmarkAction) -> Dict:
+         return {"wait_seconds": action.wait_seconds}
+
+     def _parse_result(self, payload: Dict) -> StepResult[BenchmarkObservation]:
+         obs_data = payload.get("observation", {})
+         observation = BenchmarkObservation(
+             host_url=obs_data.get("host_url", ""),
+             pid=obs_data.get("pid", 0),
+             session_hash=obs_data.get("session_hash", ""),
+             waited_seconds=obs_data.get("waited_seconds", 0.0),
+             timestamp=obs_data.get("timestamp", 0.0),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
+
+
+ class BenchmarkEnvWS(WebSocketEnvClient[BenchmarkAction, BenchmarkObservation]):
+     """WebSocket client for the Benchmark Environment."""
+
+     def _step_payload(self, action: BenchmarkAction) -> Dict:
+         return {"wait_seconds": action.wait_seconds}
+
+     def _parse_result(self, payload: Dict) -> StepResult[BenchmarkObservation]:
+         obs_data = payload.get("observation", {})
+         observation = BenchmarkObservation(
+             host_url=obs_data.get("host_url", ""),
+             pid=obs_data.get("pid", 0),
+             session_hash=obs_data.get("session_hash", ""),
+             waited_seconds=obs_data.get("waited_seconds", 0.0),
+             timestamp=obs_data.get("timestamp", 0.0),
+             done=payload.get("done", False),
+             reward=payload.get("reward"),
+             metadata=obs_data.get("metadata", {}),
+         )
+         return StepResult(
+             observation=observation,
+             reward=payload.get("reward"),
+             done=payload.get("done", False),
+         )
+
+     def _parse_state(self, payload: Dict) -> State:
+         return State(
+             episode_id=payload.get("episode_id"),
+             step_count=payload.get("step_count", 0),
+         )
models.py ADDED
@@ -0,0 +1,35 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Data models for the Benchmark Environment.
+
+ The benchmark environment is designed for testing concurrency and infrastructure.
+ Actions specify a wait time in seconds, allowing testing of parallel execution.
+ """
+
+ from pydantic import Field
+
+ from openenv.core.env_server.types import Action, Observation
+
+
+ class BenchmarkAction(Action):
+     """Action for the Benchmark environment - specifies seconds to wait."""
+
+     wait_seconds: float = Field(default=0.0, ge=0.0, description="Seconds to wait/sleep")
+
+
+ class BenchmarkObservation(Observation):
+     """Observation from the Benchmark environment with server identity info."""
+
+     # Server identity
+     host_url: str = Field(default="", description="URL of the server that handled the request")
+     pid: int = Field(default=0, description="Process ID of the server")
+     session_hash: str = Field(default="", description="Unique hash identifying this server session")
+
+     # Timing info
+     waited_seconds: float = Field(default=0.0, description="Actual seconds waited")
+     timestamp: float = Field(default=0.0, description="Unix timestamp when observation was created")
openenv.yaml ADDED
@@ -0,0 +1,7 @@
+ spec_version: 1
+ name: benchmark
+ type: space
+ runtime: fastapi
+ app: server.app:app
+ port: 8000
openenv_benchmark.egg-info/PKG-INFO ADDED
@@ -0,0 +1,9 @@
+ Metadata-Version: 2.4
+ Name: openenv-benchmark
+ Version: 0.1.0
+ Summary: Benchmark environment for OpenEnv
+ Requires-Python: >=3.10
+ Requires-Dist: openenv[core]>=0.2.0
+ Provides-Extra: dev
+ Requires-Dist: pytest>=8.0.0; extra == "dev"
+ Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
openenv_benchmark.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,18 @@
+ README.md
+ __init__.py
+ client.py
+ models.py
+ pyproject.toml
+ ./__init__.py
+ ./client.py
+ ./models.py
+ ./test_concurrency.py
+ openenv_benchmark.egg-info/PKG-INFO
+ openenv_benchmark.egg-info/SOURCES.txt
+ openenv_benchmark.egg-info/dependency_links.txt
+ openenv_benchmark.egg-info/entry_points.txt
+ openenv_benchmark.egg-info/requires.txt
+ openenv_benchmark.egg-info/top_level.txt
+ server/__init__.py
+ server/app.py
+ server/benchmark_environment.py
openenv_benchmark.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
openenv_benchmark.egg-info/entry_points.txt ADDED
@@ -0,0 +1,2 @@
+ [console_scripts]
+ server = benchmark.server.app:main
openenv_benchmark.egg-info/requires.txt ADDED
@@ -0,0 +1,5 @@
+ openenv[core]>=0.2.0
+
+ [dev]
+ pytest>=8.0.0
+ pytest-cov>=4.0.0
openenv_benchmark.egg-info/top_level.txt ADDED
@@ -0,0 +1 @@
+ benchmark
pyproject.toml ADDED
@@ -0,0 +1,43 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ [build-system]
+ requires = ["setuptools>=45", "wheel"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "openenv-benchmark"
+ version = "0.1.0"
+ description = "Benchmark environment for OpenEnv"
+ requires-python = ">=3.10"
+ dependencies = [
+     # Core OpenEnv runtime (provides FastAPI server + HTTP client types)
+     "openenv[core] @ git+https://github.com/meta-pytorch/OpenEnv.git@ws-in-cli",
+     # Environment-specific dependencies
+     # Add all dependencies needed for your environment here
+     # Examples:
+     #   "numpy>=1.19.0",
+     #   "torch>=2.0.0",
+     #   "gymnasium>=0.29.0",
+     #   "openspiel>=1.0.0",
+     #   "smolagents>=1.22.0,<2",
+ ]
+
+ [project.optional-dependencies]
+ dev = [
+     "pytest>=8.0.0",
+     "pytest-cov>=4.0.0",
+ ]
+
+ [project.scripts]
+ # Server entry point - enables running via: uv run --project . server
+ # or: python -m benchmark.server.app
+ server = "benchmark.server.app:main"
+
+ [tool.setuptools]
+ include-package-data = true
+ packages = ["benchmark", "benchmark.server"]
+ package-dir = { "benchmark" = ".", "benchmark.server" = "server" }
server/__init__.py ADDED
@@ -0,0 +1,12 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """Benchmark environment server components."""
+
+ from .benchmark_environment import BenchmarkEnvironment
+
+ __all__ = ["BenchmarkEnvironment"]
server/app.py ADDED
@@ -0,0 +1,82 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ FastAPI application for the Benchmark Environment.
+
+ This module creates an HTTP server that exposes the BenchmarkEnvironment
+ over HTTP and WebSocket endpoints, compatible with HTTPEnvClient and WebSocketEnvClient.
+
+ Endpoints:
+     - POST /reset: Reset the environment
+     - POST /step: Execute an action
+     - GET /state: Get current environment state
+     - GET /schema: Get action/observation schemas
+     - WS /ws: WebSocket endpoint for persistent sessions
+
+ Usage:
+     # Development (with auto-reload):
+     uvicorn server.app:app --reload --host 0.0.0.0 --port 8000
+
+     # Production:
+     uvicorn server.app:app --host 0.0.0.0 --port 8000 --workers 4
+
+     # Or run directly:
+     python -m server.app
+ """
+
+ try:
+     from openenv.core.env_server.http_server import create_app
+ except Exception as e:  # pragma: no cover
+     raise ImportError(
+         "openenv is required for the web interface. Install dependencies with `uv sync`."
+     ) from e
+
+ from benchmark.models import BenchmarkAction, BenchmarkObservation
+
+ from .benchmark_environment import BenchmarkEnvironment
+
+
+ # Create the app with web interface and README integration
+ # max_concurrent_envs controls how many simultaneous WebSocket sessions can exist
+ # Each session gets its own BenchmarkEnvironment instance
+ app = create_app(
+     BenchmarkEnvironment,
+     BenchmarkAction,
+     BenchmarkObservation,
+     env_name="benchmark",
+     max_concurrent_envs=1000,  # Allow many concurrent WebSocket sessions
+ )
+
+
+ def main(host: str = "0.0.0.0", port: int = 8000):
+     """
+     Entry point for direct execution via uv run or python -m.
+
+     This function enables running the server without Docker:
+         uv run --project . server
+         uv run --project . server --port 8001
+         python -m benchmark.server.app
+
+     Args:
+         host: Host address to bind to (default: "0.0.0.0")
+         port: Port number to listen on (default: 8000)
+
+     For production deployments, consider using uvicorn directly with
+     multiple workers:
+         uvicorn benchmark.server.app:app --workers 4
+     """
+     import uvicorn
+
+     uvicorn.run(app, host=host, port=port)
+
+
+ if __name__ == "__main__":
+     import argparse
+
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--port", type=int, default=8000)
+     args = parser.parse_args()
+     main(port=args.port)
server/benchmark_environment.py ADDED
@@ -0,0 +1,118 @@
+ # Copyright (c) Meta Platforms, Inc. and affiliates.
+ # All rights reserved.
+ #
+ # This source code is licensed under the BSD-style license found in the
+ # LICENSE file in the root directory of this source tree.
+
+ """
+ Benchmark Environment Implementation.
+
+ A test environment for benchmarking infrastructure and concurrency.
+ Actions specify how many seconds to wait, allowing testing of parallel execution.
+ """
+
+ import asyncio
+ import hashlib
+ import os
+ import socket
+ import time
+ from uuid import uuid4
+
+ from openenv.core.env_server.interfaces import Environment
+ from openenv.core.env_server.types import State
+
+ from models import BenchmarkAction, BenchmarkObservation
+
+
+ def _get_host_url() -> str:
+     """Get the host URL for this server."""
+     hostname = socket.gethostname()
+     port = os.environ.get("PORT", "8000")
+     try:
+         ip = socket.gethostbyname(hostname)
+     except socket.gaierror:
+         ip = "127.0.0.1"
+     return f"http://{ip}:{port}"
+
+
+ class BenchmarkEnvironment(Environment):
+     """
+     A benchmark environment for testing concurrency and infrastructure.
+
+     Actions specify a number of seconds to wait (sleep), which is useful for
+     testing parallel execution and concurrency limits. The environment returns
+     identity information (host_url, pid, session_hash) to help verify which
+     server instance handled the request.
+
+     Example:
+         >>> env = BenchmarkEnvironment()
+         >>> obs = env.reset()
+         >>> print(obs.host_url)       # "http://192.168.1.1:8000"
+         >>> print(obs.pid)            # 12345
+         >>> print(obs.session_hash)   # "a1b2c3d4..."
+         >>>
+         >>> obs = env.step(BenchmarkAction(wait_seconds=2.0))
+         >>> print(obs.waited_seconds)  # 2.0
+     """
+
+     # Enable concurrent WebSocket sessions
+     CONCURRENCY_SAFE: bool = True
+
+     def __init__(self):
+         """Initialize the benchmark environment."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         self._session_hash = hashlib.sha256(
+             f"{uuid4()}-{time.time()}-{os.getpid()}".encode()
+         ).hexdigest()[:16]
+         self._pid = os.getpid()
+         self._host_url = _get_host_url()
+
+     def _make_observation(
+         self, waited_seconds: float = 0.0, done: bool = False, reward: float = 0.0
+     ) -> BenchmarkObservation:
+         """Create an observation with current server identity."""
+         return BenchmarkObservation(
+             host_url=self._host_url,
+             pid=self._pid,
+             session_hash=self._session_hash,
+             waited_seconds=waited_seconds,
+             timestamp=time.time(),
+             done=done,
+             reward=reward,
+         )
+
+     def reset(self) -> BenchmarkObservation:
+         """Reset the environment."""
+         self._state = State(episode_id=str(uuid4()), step_count=0)
+         return self._make_observation(waited_seconds=0.0, done=False, reward=0.0)
+
+     def step(self, action: BenchmarkAction) -> BenchmarkObservation:
+         """Execute a step by waiting for the specified seconds (sync)."""
+         self._state.step_count += 1
+         wait_time = max(0.0, action.wait_seconds)
+
+         if wait_time > 0:
+             time.sleep(wait_time)
+
+         reward = 1.0 / (1.0 + wait_time)
+         return self._make_observation(
+             waited_seconds=wait_time, done=False, reward=reward
+         )
+
+     async def step_async(self, action: BenchmarkAction) -> BenchmarkObservation:
+         """Execute a step by waiting for the specified seconds (async)."""
+         self._state.step_count += 1
+         wait_time = max(0.0, action.wait_seconds)
+
+         if wait_time > 0:
+             await asyncio.sleep(wait_time)
+
+         reward = 1.0 / (1.0 + wait_time)
+         return self._make_observation(
+             waited_seconds=wait_time, done=False, reward=reward
+         )
+
+     @property
+     def state(self) -> State:
+         """Get the current environment state."""
+         return self._state
server/requirements.txt ADDED
@@ -0,0 +1,6 @@
+ openenv[core]>=0.2.0
+ fastapi>=0.115.0
+ uvicorn>=0.24.0
test_concurrency.py ADDED
@@ -0,0 +1,205 @@
+ #!/usr/bin/env python3
+ """
+ Concurrency test for benchmark environment using WebSockets.
+
+ Each WebSocket connection gets its own dedicated environment session,
+ enabling true concurrent execution across multiple sessions.
+
+ Run the server first:
+     cd benchmark && uvicorn server.app:app --port 8000
+
+ Then run this script:
+     python test_concurrency.py --requests 100 --wait 1.0
+     python test_concurrency.py -n 100 -w 1 --url wss://your-server.hf.space
+ """
+
+ import argparse
+ import asyncio
+ import json
+ import time
+ from dataclasses import dataclass
+
+ try:
+     import websockets
+ except ImportError:
+     print("Install websockets: pip install websockets")
+     raise
+
+
+ @dataclass
+ class RequestResult:
+     """Result from a single WebSocket request."""
+
+     request_id: int
+     wait_requested: float
+     waited_seconds: float
+     elapsed: float
+     pid: int
+     session_hash: str
+     host_url: str
+
+
+ def convert_to_ws_url(url: str) -> str:
+     """Convert HTTP URL to WebSocket URL."""
+     url = url.rstrip("/")
+     if url.startswith("http://"):
+         return "ws://" + url[7:] + "/ws"
+     elif url.startswith("https://"):
+         return "wss://" + url[8:] + "/ws"
+     elif url.startswith("ws://") or url.startswith("wss://"):
+         return url if url.endswith("/ws") else url + "/ws"
+     return "ws://" + url + "/ws"
+
+
+ async def ws_session(
+     ws_url: str,
+     request_id: int,
+     wait_seconds: float,
+     timeout: float = 60.0,
+ ) -> RequestResult:
+     """
+     Run a complete WebSocket session: connect, reset, step, close.
+
+     Each session gets its own environment instance on the server.
+     """
+     start = time.perf_counter()
+
+     async with websockets.connect(ws_url, open_timeout=timeout) as ws:
+         # Reset to initialize environment
+         await ws.send(json.dumps({"type": "reset", "data": {}}))
+         reset_response = json.loads(await asyncio.wait_for(ws.recv(), timeout))
+
+         if reset_response.get("type") == "error":
+             raise RuntimeError(f"Reset error: {reset_response}")
+
+         # Step with wait time
+         await ws.send(
+             json.dumps({
+                 "type": "step",
+                 "data": {"wait_seconds": wait_seconds},
+             })
+         )
+         step_response = json.loads(await asyncio.wait_for(ws.recv(), timeout))
+
+         if step_response.get("type") == "error":
+             raise RuntimeError(f"Step error: {step_response}")
+
+         # Close cleanly
+         await ws.send(json.dumps({"type": "close"}))
+
+     elapsed = time.perf_counter() - start
+     obs = step_response.get("data", {}).get("observation", {})
+
+     return RequestResult(
+         request_id=request_id,
+         wait_requested=wait_seconds,
+         waited_seconds=obs.get("waited_seconds", 0.0),
+         elapsed=elapsed,
+         pid=obs.get("pid", 0),
+         session_hash=obs.get("session_hash", ""),
+         host_url=obs.get("host_url", ""),
+     )
+
+
+ async def run_concurrent_test(
+     url: str,
+     num_requests: int,
+     wait_seconds: float,
+     timeout: float = 120.0,
+ ) -> dict:
+     """Run concurrent WebSocket sessions and collect results."""
+     ws_url = convert_to_ws_url(url)
+
+     print(f"WebSocket URL: {ws_url}")
+     print(f"Running {num_requests} concurrent sessions, each waiting {wait_seconds}s...")
+     print()
+
+     start = time.perf_counter()
+
+     # Launch all sessions concurrently
+     tasks = [
+         ws_session(ws_url, i, wait_seconds, timeout) for i in range(num_requests)
+     ]
+
+     results = await asyncio.gather(*tasks, return_exceptions=True)
+
+     total_time = time.perf_counter() - start
+
+     # Process results
+     successful = [r for r in results if isinstance(r, RequestResult)]
+     failed = [r for r in results if isinstance(r, Exception)]
+
+     if not successful:
+         print("All requests failed!")
+         for i, err in enumerate(failed[:5]):
+             print(f"  Error {i}: {err}")
+         return {"error": "All requests failed"}
+
+     avg_time = sum(r.elapsed for r in successful) / len(successful)
+     unique_pids = set(r.pid for r in successful)
+     unique_sessions = set(r.session_hash for r in successful)
+     unique_hosts = set(r.host_url for r in successful)
+
+     return {
+         "num_requests": num_requests,
+         "successful": len(successful),
+         "failed": len(failed),
+         "wait_seconds": wait_seconds,
+         "total_time": total_time,
+         "avg_time": avg_time,
+         "unique_pids": len(unique_pids),
+         "unique_sessions": len(unique_sessions),
+         "unique_hosts": len(unique_hosts),
+         "pids": list(unique_pids)[:10],  # First 10 for display
+     }
+
+
+ async def main():
+     parser = argparse.ArgumentParser(
+         description="Test benchmark environment concurrency via WebSocket"
+     )
+     parser.add_argument(
+         "--requests", "-n", type=int, default=10,
+         help="Number of concurrent WebSocket sessions",
+     )
+     parser.add_argument(
+         "--wait", "-w", type=float, default=1.0,
+         help="Wait time per request (seconds)",
+     )
+     parser.add_argument(
+         "--url", "-u", type=str, default="http://localhost:8000",
+         help="Server URL (http/https/ws/wss)",
+     )
+     parser.add_argument(
+         "--timeout", "-t", type=float, default=120.0,
+         help="Timeout per request (seconds)",
+     )
+     args = parser.parse_args()
+
+     result = await run_concurrent_test(
+         args.url, args.requests, args.wait, args.timeout
+     )
+
+     if "error" in result:
+         return
+
+     print(f"Successful: {result['successful']}/{result['num_requests']}")
+     if result["failed"]:
+         print(f"Failed: {result['failed']}")
+     print(f"Total time: {result['total_time']:.3f}s")
+     print(f"Avg time: {result['avg_time']:.3f}s")
+     print(f"Unique PIDs: {result['unique_pids']}")
+     print(f"Unique sessions: {result['unique_sessions']}")
+     print(f"Unique hosts: {result['unique_hosts']}")
+
+     # Calculate concurrency metrics
+     ideal_time = args.wait
+     actual_concurrency = (args.requests * args.wait) / result["total_time"]
+     print()
+     print(f"Ideal time (full concurrency): {ideal_time:.3f}s")
+     print(f"Effective concurrency: {actual_concurrency:.1f}x")
+
+
+ if __name__ == "__main__":
+     asyncio.run(main())