Accelerating Cloud Servers with Local Intranet Proxy

Introduction⌗
Recently, while deploying services for a client, I found that Docker Hub, which had only just become reachable again, was once more inaccessible, and the Docker installation script could not be downloaded either. This was very frustrating: a deployment that should have taken a few minutes ended up taking fifteen to twenty.
So I spent time looking for solutions, and currently, the most effective ones are as follows:
- Using Cloudflare to set up a Docker Registry Proxy
- Building a more general-purpose proxy on top of an existing proxy service
- Downloading Docker images locally, then through a series of operations: exporting, uploading to the server, importing to the server, and renaming the tags
Before this, I had been using the third method for installation and deployment. It worked, but it was too cumbersome: each time I had to pull the AMD64-architecture images locally, then delete them again after the deployment was done.
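The manual workflow in method three can be sketched as a small script. The image name and server address below are placeholders, not values from my actual setup:

```shell
# Placeholders -- substitute your own image and server.
image="nginx:1.27"
server="user@your-server"

publish_image() {
  docker pull --platform linux/amd64 "$image"    # the cloud host is x86_64
  docker save "$image" -o image.tar              # export the image to a tarball
  scp image.tar "$server:/tmp/"                  # upload the tarball to the server
  ssh "$server" "docker load -i /tmp/image.tar"  # import it on the server
  docker rmi "$image"                            # clean up the local copy afterwards
}
```

Repeating this for every image a deployment needs is exactly why the method gets tedious.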
Why not use the first method? Mainly because it requires replacing the docker.io prefix of every image with your own domain name, which is also quite troublesome in projects with complex service orchestration.
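To illustrate why this is troublesome: with a registry proxy at a hypothetical mirror domain, every image reference in a compose file has to be rewritten, for example:

```yaml
services:
  app:
    # Originally (implicitly pulled from docker.io):
    # image: nginx:latest
    # Rewritten for a hypothetical registry proxy domain:
    image: mirror.example.com/library/nginx:latest
```

Multiply that by every service in a large stack and the appeal fades quickly.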
So I wondered whether I could use a TCP Tunnel to let the cloud server route its traffic through my local Surge proxy. After settling on this approach, I ran several rounds of tests and the results were quite good.
However, this approach does place demands on your local network's upload bandwidth and the server's bandwidth; otherwise, the speed will not be ideal!
Requirements⌗
- A server with a public IP address in China
- Local access to tools for bypassing internet restrictions
Deploying the Expose Service⌗
For this part, you can refer to my previous article “Private Deployment of Expose to Implement Intranet Tunnel”, so I won’t repeat it here.
The previous article did not cover the TCP Tunnel part. Here, you need to expose the TCP Tunnel port range according to your server configuration!
services:
  expose:
    image: beyondcodegmbh/expose-server:latest
    container_name: expose
    hostname: expose
    restart: always
    ports:
      - 0.0.0.0:8888-8890:8888-8890/tcp
    labels:
      - traefik.enable=true
      - traefik.http.routers.expose.tls=true
      - traefik.http.routers.expose.tls.certResolver=example
      - traefik.http.routers.expose.rule=Host(`example.com`)
      - traefik.http.routers.expose.service=expose
      - traefik.http.routers.expose.entrypoints=http,https
      - traefik.http.services.expose.loadbalancer.server.port=443
      - traefik.http.routers.tunnel.tls=true
      - traefik.http.routers.tunnel.tls.certResolver=example
      - traefik.http.routers.tunnel.tls.domains[0].main=example.com
      - traefik.http.routers.tunnel.tls.domains[0].sans=*.example.com
      - traefik.http.routers.tunnel.rule=HostRegexp(`^.+.example.com$`)
      - traefik.http.routers.tunnel.service=tunnel
      - traefik.http.routers.tunnel.priority=5
      - traefik.http.routers.tunnel.entrypoints=http,https
      - traefik.http.services.tunnel.loadbalancer.server.port=443
    volumes:
      - ./config/expose.php:/src/config/expose.php
      - ./database/expose.db:/root/.expose
    networks:
      - traefik
    extra_hosts:
      - host.docker.internal:host-gateway
    environment:
      port: 443
      domain: example.com
      username: YOUR_USERNAME
      password: YOUR_PASSWORD

networks:
  traefik:
    external: true
Next, modify the service configuration. By default, the TCP Tunnel port range is 50000-60000; I don't need that many ports, so I set a smaller range.
<?php

return [
    ......

    'admin' => [
        /*
        |--------------------------------------------------------------------------
        | TCP Port Sharing
        |--------------------------------------------------------------------------
        |
        | Control if you want to allow users to share TCP ports with your Expose
        | server. You can add fine-grained control per authentication token,
        | but if you want to disable TCP port sharing in general, set this
        | value to false.
        |
        */
        'allow_tcp_port_sharing' => true,

        /*
        |--------------------------------------------------------------------------
        | TCP Port Range
        |--------------------------------------------------------------------------
        |
        | Expose allows you to also share TCP ports, for example when sharing your
        | local SSH server with the public. This setting allows you to define the
        | port range that Expose will use to assign new ports to the users.
        |
        | Note: Do not use port ranges below 1024, as it might require root
        | privileges to assign these ports.
        |
        */
        'tcp_port_range' => [
            'from' => 8888,
            'to' => 8890,
        ],

        ......
    ],
];
In addition to these configurations, you also need to add inbound rules for these ports in your cloud provider's security group settings!
Creating a TCP Tunnel⌗
After completing the configuration and starting the service, you can run the following command locally to create a TCP Tunnel:
$ expose share-port 6152
Thank you for using expose.
Local-Port: 6152
Shared-Port: 8888
Expose-URL: tcp://example.com:8888
When the above output appears, the Tunnel has been successfully created. Next, use the proxy on your server!
Usage⌗
After completing the above operations, on other servers, you just need to use this TCP Tunnel like a regular proxy.
$ export HTTPS_PROXY=http://example.com:8888
# Verify if it's working
$ curl -I https://www.google.com
HTTP/1.0 200 Connection established
HTTP/2 200
......
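One caveat for the Docker use case that motivated all of this: docker pull is performed by the Docker daemon, which ignores the shell's HTTPS_PROXY variable. On Docker Engine 23.0 or later, you can instead configure the proxy in /etc/docker/daemon.json; the URL below assumes the tunnel address from the previous section:

```json
{
  "proxies": {
    "http-proxy": "http://example.com:8888",
    "https-proxy": "http://example.com:8888",
    "no-proxy": "localhost,127.0.0.1"
  }
}
```

After editing the file, restart the daemon (e.g. systemctl restart docker) for the setting to take effect.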
Conclusion⌗
This method is only suitable for personal use with a small number of servers. If you need to frequently pull images or access other resources, it’s recommended to choose other more effective solutions!
I hope this is helpful, Happy hacking…