Upstream services connect to Pico and register endpoints. Pico will then route requests for an endpoint to a registered upstream service via its outbound-only connection. This means you can expose your services without opening a public port.
Pico runs as a cluster of nodes to be fault tolerant, scale horizontally, and support zero-downtime deployments. It's also easy to host, such as a Kubernetes Deployment or StatefulSet behind an HTTP load balancer.
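As a rough sketch of that flow (commands based on Piko's getting-started docs; the endpoint name, ports, and header are placeholders/assumptions, so check the docs before relying on them):

```shell
# Start a server node (assumption: proxy on :8000, upstream on :8001).
piko server

# On the upstream host: open an outbound-only connection to the server
# and register the endpoint "my-endpoint", forwarding to a local :3000.
piko agent http my-endpoint 3000

# A proxy client then addresses the endpoint, e.g. via a request header:
curl -H "x-piko-endpoint: my-endpoint" http://localhost:8000
```

No inbound port is ever opened on the upstream host; only the agent's outbound connection carries traffic back.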
Related -- we also built a simple (but not production-grade) tunneling solution just for devving on our open-source project (multiplayer game server management).
We recently ran into an issue where we needed devs to have a public IP with vanilla TCP+TLS sockets to hack on some parts of our software. I tried Ngrok TCP endpoints, but didn't feel comfortable requiring our maintainers to pay for a SaaS just to hack around with our software. Cloudflare Tunnels is awesome if you know what you're doing, but too complicated to set up.
It works by automating a Terraform plan to (a) set up a remote VM, (b) set up SSH keys, and (c) create a container that uses reverse SSH tunneling to expose a port on the host. We get the benefit of a dedicated IP + any port + no 3rd party vendors for $2.50/mo in your own cloud. All you need is a Linode access token, arguably faster and cheaper than any other reverse tunneling software.
Source: https://github.com/rivet-gg/rivet/tree/main/infra/dev-tunnel
Setup guide: https://github.com/rivet-gg/rivet/blob/main/docs/infrastruct...
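For reference, the core of step (c) is plain OpenSSH remote forwarding; a minimal hand-rolled version (hostname, user, and ports here are placeholders, not the actual Terraform output) looks like:

```shell
# On the cloud VM, allow remote forwards to bind non-loopback addresses:
#   /etc/ssh/sshd_config -> GatewayPorts yes   (then restart sshd)

# From the dev machine: expose local :8080 on the VM's public IP.
# -N: no remote command, just forwarding; -R: remote (reverse) forward.
ssh -N -R 0.0.0.0:8080:localhost:8080 dev@<vm-public-ip>
```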
I'll try to get this merged today.
As someone suggested below, I'll rename to 'Piko'
Can the client still talk to the service nodes? Is this over the same tunnel, or does the agent need to create a new tunnel? What happens to requests that are sent from a proxy-client to the service nodes during this transition?
Or at a much higher level: Can I deploy new service nodes without downtime?
So if you have a single upstream for an endpoint, when the upstream reconnects there may be a second where it isn't connected, but it will recover quickly (planning to add retries in the future to handle this more gracefully).
Similarly, if a server node fails the upstream can reconnect.
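Until built-in retries land, one crude client-side workaround is to wrap the upstream agent in a reconnect loop. The wrapper below is generic; the `piko agent ...` invocation at the end is only illustrative:

```shell
#!/bin/sh
# Re-run a command until it exits cleanly, backing off between attempts.
# Keeps an upstream reconnecting if its server node fails or restarts.
retry() {
  delay=1
  until "$@"; do
    echo "command failed; retrying in ${delay}s" >&2
    sleep "$delay"
    # Exponential backoff, capped at 30s.
    delay=$((delay * 2))
    [ "$delay" -gt 30 ] && delay=30
  done
}

# Example (hypothetical agent invocation):
# retry piko agent http my-endpoint 3000
```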
https://www.haproxy.com/blog/announcing-haproxy-2-9#reverse-...
https://datatracker.ietf.org/doc/draft-bt-httpbis-reverse-ht...
Or does Piko implement custom upstream registration and require custom application code to handle this channel?
We did this as Ziti provides a platform for developing secure-by-default, distributed applications more quickly, which is why zrok has been built by a single developer in about 18 months and has almost reached feature parity with Ngrok (which many people have been developing for almost 10 years).
...so, basically:
wget https://get.openziti.io/dock/all-in-one/compose.yml
docker compose up
Piko is also designed to be easier to host, so it can sit behind an HTTP load balancer. That does mean Piko is currently limited to HTTP only, but that seemed a worthwhile tradeoff to make it easier to host
Yes [1]
You could try IPv6.rs (shameless plug). We provide a routed IPv6 IP and reverse proxy for IPv4. We made it easy to run servers with Cloud Seeder [2], our open source server manager.
[1] https://blog.ipv6.rs/understanding-tls-mitm-and-privacy-poli...
podman-kube-play: https://docs.podman.io/en/latest/markdown/podman-kube-play.1...
helm template: https://helm.sh/docs/helm/helm_template/
"RFE Allow podman to run Helm charts" https://github.com/containers/podman/issues/15098#issuecomme...
helm template --dry-run --debug . --generate-name \
--values values.yaml | tee kube42.yml && \
podman kube play kube42.yml

Is there a SOCKS5 proxy or something I can configure in my web browser?
Pico is a reverse proxy, so the upstream services open outbound-only connections to Pico, then proxy clients send HTTP requests to Pico which are then routed to the upstream services
So as long as your browser can access Pico it should work like any other proxy
(There's a getting started guide if that helps: https://github.com/andydunstall/pico/blob/main/docs/getting-...)
But doesn't the Pico cluster have to expose a public port?
- If you're trying to access a customer network (such as for BYOC), exposing a public port in the customer network is likely a no-go (or would require complex networking to set up VPC peering etc)
- The Pico 'proxy' port doesn't need to be public (and in most cases won't be); e.g. you can expose it only to clients in the same network (which is one of the benefits of self-hosting)
- The Pico 'upstream' port (that upstream services connect to) will usually need to be public, but that can use TLS and has JWT authentication