Zero Trust Sockets

Simplify Network Security With Atsign’s Networking 2.0 Technology


Zero Trust Architecture (ZTA) is the current mantra of security papers and of government organizations like NIST in the USA. The IT industry is keen to jump on the ZTA bandwagon, along with other hot technologies like artificial intelligence (AI).

The architecture is sound, but implementing it is increasingly complex. Business rules have to be translated into archaic technology rules, such as cumbersome firewall rules or complex infrastructure configurations. Like self-driving cars, self-securing networks seem possible—but sadly, the more we and our AIs learn, the further away they seem. With the constant onslaught of security breaches, we need technology that we can use immediately.

There is another approach that is simpler and starts from the most fundamental building block of IP networks: the venerable socket.

Fifty years ago, in 1974, the TCP/IP protocol was revealed to the world in Request for Comments (RFC) 675, later followed by RFC 791 (IP) and RFC 793 (TCP). Sockets were, and still are, a way of multiplexing a single physical connection into many logical connections. A single IP interface can use sockets to maintain many simultaneous connections. We take this for granted now, but having multiple people logged in at the same time over a single physical connection was significant at the time.

Today, some of the socket numbers used in TCP have become “well known”: port 80 for HTTP (web), port 25 for SMTP (email), and port 23 for Telnet (command line). Most of these have since been superseded by their network-encrypted counterparts: 443 (HTTPS), 465 (SMTPS), and 22 (SSH).

The basic premise of sockets is that they come in pairs: a listening (inbound) socket and a connecting (outbound) socket. Once the two meet, you have a socket connection, and two machines can communicate. This is also the Achilles’ heel of sockets. A process needs to be listening and reacting to connections, and if that code is not perfect, bad actors can take advantage of the weakness. When this happens, it can be catastrophic. So, over time, the industry created tools to mitigate this threat: firewalls that permit or deny access to listening sockets according to rulesets, and VPNs that create tunnels so sockets can be connected over hostile networks.
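The pairing described above can be sketched with Python’s standard socket module; the loopback address, OS-chosen port, and payload here are arbitrary choices for illustration:

```python
import socket
import threading

# Illustrative only: a listening (inbound) socket and a connecting
# (outbound) socket meeting on the loopback interface.

def run_listener(server_sock, result):
    conn, _addr = server_sock.accept()   # block until a peer connects
    with conn:
        result.append(conn.recv(1024))   # read what the peer sent

# The listening side: bind, listen, accept.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0 = let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=run_listener, args=(server, received))
t.start()

# The connecting side: an outbound socket to the listener's address.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
client.close()

t.join()
server.close()
print(received[0])  # b'hello'
```

The `accept()` call is exactly the exposure the article describes: whatever code sits behind a listening socket must handle anything the network throws at it.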

These tools have served us well, but they too have problems. The rules need to reflect what is required at any given time, yet they are, at best, fiddly. All too often, firewall rules are wrong or, even worse, never get updated. Rules tend to get added and never removed, resulting in backdoors and weaknesses that are becoming ever easier to exploit, as recent cyberattacks have shown.

The industry has reacted, of course, with a seemingly never-ending set of products that help mitigate the issues and add increasingly complex rules. So, we can expect to see real-time, AI-powered systems hungry for data like log files, network taps, and communication back to control points like firewalls. Over the years we have watched as vast amounts of money and brainpower have been applied to this approach, and it’s becoming increasingly clear that if there were a ‘there’ there, we’d have solved this problem already. Networks need to be secured. The frameworks that NIST and others have created provide us with valuable clues and a foundation to build upon.

NIST ZTNA Framework

If sockets are the fundamental building block of a TCP/IP network, then we should be thinking about how to apply Zero Trust principles at that level. First, no ports should ever be listening where data is held. So, the client and the server should only make outbound connections. This means, in effect, that neither client nor server has a TCP/IP attack surface. In this case, if you used a network scanner (e.g., nmap), you would find nothing to attack.

This sounds great, but as mentioned above, TCP/IP requires two types of socket: the listening socket and the connecting socket. Here, we only have two connecting sockets, so we need a third actor in the TCP connection, something that can act as the connection point. This third actor must have listening sockets that first authenticate and then connect the two TCP connections. This third party holds no data; it just authenticates and connects TCP/IP sockets. Once connected, the two endpoints can also mutually authenticate. Then the connection is established and the normal data flow can proceed. Solved? In principle, yes, but in reality, things are a little more complicated.
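A minimal sketch of that third actor, assuming plain unauthenticated TCP for brevity (a real rendezvous point would authenticate both sides before bridging, as the article goes on to describe):

```python
import socket
import threading

# The relay holds the only listening socket. Both endpoints make purely
# outbound connections; the relay just moves bytes between them.

def relay(server_sock):
    a, _ = server_sock.accept()          # first outbound caller (the "server" endpoint)
    b, _ = server_sock.accept()          # second outbound caller (the "client" endpoint)
    def pipe(src, dst):
        while (data := src.recv(1024)):  # copy until the source closes
            dst.sendall(data)
    threading.Thread(target=pipe, args=(a, b), daemon=True).start()
    threading.Thread(target=pipe, args=(b, a), daemon=True).start()

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(2)
port = srv.getsockname()[1]
threading.Thread(target=relay, args=(srv,), daemon=True).start()

# "Server" endpoint: connects OUT to the relay, then waits for a request.
server_end = socket.create_connection(("127.0.0.1", port))
# "Client" endpoint: also connects OUT to the relay.
client_end = socket.create_connection(("127.0.0.1", port))

client_end.sendall(b"ping")
request = server_end.recv(1024)          # the relay forwarded it
server_end.sendall(b"pong")
answer = client_end.recv(1024)
print(request, answer)  # b'ping' b'pong'
```

Neither endpoint ever calls `listen()`, so a port scan of either machine finds nothing; only the relay is reachable, and it carries opaque bytes.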

How would this third party authenticate a connection? Each TCP/IP connection would need to authenticate; this could perhaps be done with SSL certificates, but managing client certificates at scale is not something anyone wants to do. What is needed, and what is mentioned in the NIST framework, is a “control plane”: one that is available across the Internet and end-to-end encrypted. With such a control plane, endpoints could create their own cryptographic key pairs, advertise the public keys, and authenticate and send ephemeral keys to each other, ready to use over the TCP connection established via the third-party TCP relay.
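The key-agreement idea can be illustrated with a toy finite-field Diffie-Hellman exchange in Python. The prime, generator, and variable names below are illustrative choices only; a real system would use a vetted library and a modern curve such as X25519, not this sketch:

```python
import secrets

# Toy Diffie-Hellman: each endpoint "cuts" its own ephemeral key pair
# and publishes only the public half (over the control plane). Both
# arrive at the same shared secret; the relay, seeing only the public
# values, cannot derive it.

p = 2**255 - 19      # a large prime (illustrative choice, not a vetted DH group)
g = 2                # generator (illustrative choice)

# Ephemeral private keys, never transmitted anywhere.
a_priv = secrets.randbelow(p - 2) + 1
b_priv = secrets.randbelow(p - 2) + 1

# Public halves, advertised via the control plane.
a_pub = pow(g, a_priv, p)
b_pub = pow(g, b_priv, p)

# After exchanging public keys, both sides compute the same secret.
a_shared = pow(b_pub, a_priv, p)
b_shared = pow(a_pub, b_priv, p)
assert a_shared == b_shared
```

The shared secret can then seed symmetric encryption for the data that flows through the relay, which is what keeps the relay “zero knowledge.”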

With the use of the control plane at this point, a TCP connection goes from client to the relay service, and another TCP connection goes from the server to the relay service. The ports on the relay service and the authentication method are agreed to over the control plane. Although great in concept, it would require every program to be rewritten to include this protocol. While it is possible, the uptake would likely be glacial at best. So, we have one last problem to solve: how to connect existing programs and services.

In networking there are interfaces (you are likely familiar with Ethernet or Wi-Fi), each of which has a unique IP address from which packets are sent. We do not want to have listening ports on these external interfaces, as mentioned above, because that creates an attack surface. Instead, every IP stack has a special interface that is always up and running but only available to processes on the local machine. That interface is called localhost and typically has the IP address 127.0.0.1.

Using localhost, we can bind the services we want to reach to the localhost interface on the server, and on the client machine listen on a spare localhost port. The TCP/IP connection made via the relay service can then be connected to the service on localhost on the server, with traffic to the spare port on the client forwarded in a similar manner.
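A rough sketch of that localhost bridge, with a stand-in service substituting for the real relay leg; the fake banner and the one-shot copy are simplifications for illustration:

```python
import socket
import threading

# Stand-in service, bound to loopback only: nothing listens on
# external interfaces.
service = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
service.bind(("127.0.0.1", 0))
service.listen(1)
service_port = service.getsockname()[1]

def serve():
    conn, _ = service.accept()
    with conn:
        conn.sendall(b"SSH-2.0-demo\r\n")   # pretend sshd banner

threading.Thread(target=serve, daemon=True).start()

# The forwarder: listens on a spare localhost port; on accept, it dials
# OUT (here straight to the stand-in service; in the real design this
# outbound leg would go to the relay).
fwd = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
fwd.bind(("127.0.0.1", 0))
fwd.listen(1)
fwd_port = fwd.getsockname()[1]

def forward():
    inbound, _ = fwd.accept()
    outbound = socket.create_connection(("127.0.0.1", service_port))
    inbound.sendall(outbound.recv(1024))    # one-shot copy for the sketch
    inbound.close(); outbound.close()

threading.Thread(target=forward, daemon=True).start()

# An unmodified client just connects to localhost on the spare port.
client = socket.create_connection(("127.0.0.1", fwd_port))
banner = client.recv(1024)
client.close()
print(banner)  # b'SSH-2.0-demo\r\n'
```

The point of the pattern is that the client program needs no changes at all: it simply connects to localhost, exactly as ssh does in the next paragraph.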

At this point, we have a service, for example sshd (the SSH daemon), on a server, and ssh on a client machine connecting to a port on localhost. These two connections are joined in turn via the TCP relay point, and the traffic through the relay is end-to-end encrypted.

The net result is an ssh session from a machine with no ports listening on external interfaces, connecting to an sshd (SSH daemon) server that also has no open ports, via a relay that has zero knowledge of the cryptographic keys being used.


At every stage, Zero Trust principles are used. As an added bonus, outbound network address translation (NAT) is used at both ends, so no fixed IP addresses are ever needed.

I suspect you might be thinking this sounds complex and will take years to code and perfect. Well, you’re right: it did. But the future is now. Take a look at the underlying control plane built by Atsign.

Atsign uses a new protocol, the atProtocol, whose addresses are called atSigns. Each atSign can be owned by a person, or assigned to an entity or thing, and each atSign “cuts” its own cryptographic keys. In effect, Atsign has created a new way of transporting data securely over the roads of the Internet: sockets. It has created not an overlay network, but a new inlay network that requires nothing to change yet will change everything. Atsign’s platform is open source and built for scale; see Atsign’s website and GitHub pages for more detail.

What do we call this approach? Zero Trust Sockets. Surprisingly easy to implement, Atsign’s technology dissolves many of the complexities of modern networks. Remarkably, it achieves this without modifying the core of the IP network, allowing for a project-by-project approach to both disruption and protection.

This seemingly small change has a ripple effect with significant consequences. Servers no longer need to see data unencrypted during connections, potentially disrupting business models like Facebook’s that rely on such access. Network segmentation is inherently built into this approach, as each TCP connection becomes its own encrypted tunnel. Implementing the TCP relay in silicon could revolutionize data centers. Additionally, business rules can be directly applied to data flows, eliminating the need for complex network rule translations, streamlining projects and ensuring security and privacy by design. And the best part? Servers remain invisible on the Internet with no open ports, yet authorized clients can still access them with the right cryptographic keys.

Ready to see how Atsign’s technology can transform your network security? Visit Atsign’s website to learn more.
