Custom Port Mapping and Service Exposure
How to expose services inside HyperAI containers to external networks
Feature Overview
The "Model Training" feature of HyperAI provides custom port mapping, allowing services running inside containers (such as web applications, API services, visualization tools, or large language models) to be exposed to external networks. Port mapping is a core capability of compute containers, applies to scenarios such as workspaces and batch tasks, and is designed to improve both the development experience and service deployment efficiency.
:::important Real-name Verification Requirement
Using the custom port mapping feature requires completing real-name verification. Users who have not completed real-name verification will be unable to use this feature. Please ensure you have completed the real-name verification process before use.
:::
This feature supports:
- Simultaneously exposing multiple ports inside Gear containers
- Accessing services through formatted subdomains
- Supporting multiple protocols including HTTP, WebSocket, etc.
Configuring Port Mapping
Setting During Gear Container Creation
When creating a new Gear container, you can configure the ports to be exposed in the "Port Mapping" section:
Configuration steps:
- On the "Create Container" page, locate the "Port Mapping" section
- Click the "Configure Port Mapping" button
- Fill in the following information:
- Mapping Name: Unique identifier for the port mapping
- Port Number: Port that the service listens on inside the container
 
:::caution Note
Mapping names cannot be duplicated across different containers; otherwise an error will occur.
:::
Port Mapping Restrictions
- Reserved ports (cannot be mapped): 22, 5901, 6006, 6637, 7088, 8888
- Each Gear container can configure up to 5 custom port mappings
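These restrictions can be checked locally before submitting a configuration. A minimal sketch, for illustration only; the function and its structure are not part of any HyperAI API:

```python
# Illustrative client-side check of the documented port-mapping restrictions;
# mirrors the rules above (reserved ports, max 5 mappings, unique names).
RESERVED_PORTS = {22, 5901, 6006, 6637, 7088, 8888}
MAX_MAPPINGS = 5

def validate_mappings(mappings):
    """mappings: list of (name, port) tuples for one Gear container."""
    errors = []
    if len(mappings) > MAX_MAPPINGS:
        errors.append(f"at most {MAX_MAPPINGS} mappings allowed, got {len(mappings)}")
    names = [name for name, _ in mappings]
    if len(names) != len(set(names)):
        errors.append("mapping names must be unique")
    for name, port in mappings:
        if port in RESERVED_PORTS:
            errors.append(f"port {port} is reserved and cannot be mapped")
        elif not 1 <= port <= 65535:
            errors.append(f"port {port} is not a valid port number")
    return errors

print(validate_mappings([("gradio", 7860), ("api", 8888)]))
# → ['port 8888 is reserved and cannot be mapped']
```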
Access Methods
After the Gear container status shows "Running," the system will automatically generate access addresses.
Subdomain Access
The subdomain access format depends on whether a port mapping name was provided:
When a Mapping Name is Provided
```
https://{user}-{port_name}.{domain}
```

Parameter descriptions:
- {user}: Username
- {port_name}: Name specified for the port mapping
- {domain}: Base domain of the HyperAI instance
Example: If the username is openbayes, the port mapping name is gradio, and the base domain is gear-c1.openbayes.net, the access address would be:
```
https://openbayes-gradio.gear-c1.openbayes.net
```

When No Mapping Name is Provided
```
https://{user}-{jobid}-{port_number}.{domain}
```

Parameter descriptions:
- {user}: Username
- {jobid}: Job ID of the Gear container
- {port_number}: Internal port number of the container
- {domain}: Base domain of the HyperAI instance
Example: For username openbayes, container jobid 84xr8451agrz, internal port 7860, and base domain gear-c1.openbayes.net, the access URL is:
```
https://openbayes-84xr8451agrz-7860.gear-c1.openbayes.net
```

View Configured Access URLs
The Gear container details page displays all configured port mappings and their access URLs:
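The two subdomain schemes above can also be expressed programmatically. A minimal sketch, for illustration only; this helper is not part of any HyperAI SDK:

```python
# Builds the access URL for a mapped port, following the two documented
# subdomain schemes. Illustrative only; not part of any HyperAI SDK.
def access_url(user, domain, port_number, port_name=None, jobid=None):
    if port_name:  # named mapping: {user}-{port_name}.{domain}
        return f"https://{user}-{port_name}.{domain}"
    if jobid is None:
        raise ValueError("jobid is required when no mapping name is set")
    # unnamed mapping: {user}-{jobid}-{port_number}.{domain}
    return f"https://{user}-{jobid}-{port_number}.{domain}"

print(access_url("openbayes", "gear-c1.openbayes.net", 7860, port_name="gradio"))
# → https://openbayes-gradio.gear-c1.openbayes.net
print(access_url("openbayes", "gear-c1.openbayes.net", 7860, jobid="84xr8451agrz"))
# → https://openbayes-84xr8451agrz-7860.gear-c1.openbayes.net
```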
Application Examples
Deploying a Gradio Application
Here are the steps to deploy and expose a Gradio application on HyperAI Gear:
- Install Gradio:
```shell
pip install gradio --user
```

- Create the application file app.py:
```python
import gradio as gr

def greet(name):
    return "Hello " + name + "!!"

demo = gr.Interface(fn=greet, inputs="text", outputs="text")

# Must bind to 0.0.0.0 to be accessible from outside the container
demo.launch(server_name="0.0.0.0", server_port=7860)
```

- Run the application:
```shell
python app.py
```

- Access the application through the previously configured port mapping (assuming port 7860 has been configured with a mapping named "gradio").
Deploying Large Language Model API Services
vLLM Service
```shell
# Start the service on a custom port
vllm serve <model> \
    --host 0.0.0.0 \
    --port 8080
```

SGLang Service
```shell
# Start the service and specify the port
python -m sglang.launch_server \
    --model-path <path> \
    --host 0.0.0.0 \
    --port 8080
```

Accessing LLM Services via OpenAI Compatible Interface
If the LLM service (such as vLLM, SGLang) deployed in Gear provides an OpenAI-compatible API, you can access it as follows:
- Get the list of available models:
```
https://{user}-{port_name}.{domain}/v1/models
```

Or, if no mapping name is specified:

```
https://{user}-{jobid}-{port_number}.{domain}/v1/models
```

- Access the service using the Python OpenAI SDK:
```python
from openai import OpenAI

# Choose the base URL that matches your access method
# With a mapping name:
base_url = "https://{user}-{port_name}.{domain}/v1"
# Without a mapping name:
# base_url = "https://{user}-{jobid}-{port_number}.{domain}/v1"

model_name = "your-model-name"  # Model name obtained from /v1/models

client = OpenAI(
    base_url=base_url,
    api_key="EMPTY"  # Many open-source model servers don't require a real API key
)

# Send a chat completion request
response = client.chat.completions.create(
    model=model_name,
    messages=[
        {"role": "user", "content": "Hello"}
    ]
)
print(response.choices[0].message.content)
```

Comparison: Port Mapping vs SSH Tunnel
Advantages and disadvantages of custom port mapping compared to SSH tunnel method:
| Feature | Custom Port Mapping | SSH Tunnel | 
|---|---|---|
| Configuration Method | UI interface operation | Command line and SSH keys | 
| Multi-port Support | Supports exposing multiple ports simultaneously | Each port requires separate configuration | 
| Persistence | Available throughout Gear runtime | Requires maintaining SSH connection | 
| Access Scope | Directly accessible from public network | Only accessible on machines with SSH tunnel | 
| Protocol Support | HTTP, WebSocket, etc. | All TCP protocols | 
| Security Mechanism | HTTPS encryption | SSH encryption | 
Security Considerations
When using custom port mapping in "Model Training", please note:
- Exposed services can be accessed by anyone who knows the URL, unless authentication is implemented in the application
- Avoid exposing services containing sensitive data
- It is recommended to implement appropriate security measures in the application (such as authentication and authorization)
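As one example of such a measure, a service can require a shared secret on every request. A minimal sketch of a bearer-token check; the token value and helper function are hypothetical, and you would wire this into your web framework's request handling:

```python
# Minimal bearer-token check as one example of application-level auth.
# EXPECTED_TOKEN and is_authorized are hypothetical; adapt to your framework.
import hmac

EXPECTED_TOKEN = "change-me"

def is_authorized(authorization_header):
    """Return True only for an 'Authorization: Bearer <EXPECTED_TOKEN>' header."""
    if not authorization_header or not authorization_header.startswith("Bearer "):
        return False
    token = authorization_header[len("Bearer "):]
    # Constant-time comparison avoids leaking the token through timing
    return hmac.compare_digest(token, EXPECTED_TOKEN)

print(is_authorized("Bearer change-me"))  # → True
print(is_authorized("Bearer wrong"))      # → False
```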