Intro
We all use the internet daily. As cyber attacks become ever more frequent (see the latest Facebook hack), it is evident that we must strive towards a decentralized network model – to protect our private lives and our exploitable information, and to make our internet use more efficient.
Protocol Labs developed a peer-to-peer protocol (similar to the BitTorrent model): by addressing the content you are trying to download, it fetches different bits and pieces from other nodes (computers) in the area, to deliver the data as fast as possible. The developers behind IPFS are also the people behind Filecoin (FIL), which has seen a meteoric rise on Binance over the last two weeks.
But even if you are not investing directly in the cryptocurrency world, you cannot stay blind to the ongoing developments as we stride towards Web 3.0 – a much more sophisticated, faster and safer internet. IPFS is being discussed as one of the important technologies that could work alongside existing protocols or even replace them entirely.
Let's Get Technical Here
Currently, most of the web uses the Hypertext Transfer Protocol (HTTP) to store and access data. HTTP uses your device's IP address to request data from one specific server, based on where the file is stored. Much like having to go to your own storage unit to get your bicycle.
This creates a very centralized system, in which most files are concentrated on a handful of servers able to process the massive volume of requests.
If one of those servers goes down, you cannot access the data on it whatsoever.
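To make the contrast concrete, here is roughly what that location-addressed fetch looks like in code – a minimal Python sketch, with a made-up URL standing in for the real server. The request names one machine; if that machine is unreachable, the fetch simply fails.

```python
import urllib.request

# Location addressing: the URL names WHERE the data lives, not WHAT it is.
# example.com is a placeholder host, not a real file server.
url = "https://example.com/files/bicycle.jpg"

try:
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
except OSError:
    # If this one server is down there is no fallback: the data is
    # unreachable for us, even if identical copies exist elsewhere.
    data = None
```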
And this is where IPFS comes into play. Where HTTP lags behind, IPFS makes up for it: it does not query one single server. Instead, it sends a request to all of the nodes (active devices on the network) that hold bits and pieces of the data we are trying to download – much like BitTorrent. Remember the bicycle example? Imagine thousands of engineers assembling the bike in real time, instead of you having to drive all the way downtown to that bike shop.
So as long as nodes in the network have the file, it is accessible. If one node goes down, others can step up and replace it, preserving file integrity and preventing censorship.
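Here is a toy Python sketch of that idea – split a file into chunks, address each chunk by the hash of its bytes, and fetch each chunk from whichever node happens to hold it, verifying integrity as we go. This is a simplification for illustration, not IPFS's actual wire protocol.

```python
import hashlib

def chunk_id(chunk: bytes) -> str:
    # Content addressing: the chunk's identifier IS a hash of its bytes.
    return hashlib.sha256(chunk).hexdigest()

# Simulated network: each node holds some chunks, keyed by content hash.
file_bytes = b"pretend this is a large file" * 100
chunks = [file_bytes[i:i + 256] for i in range(0, len(file_bytes), 256)]
node_a = {chunk_id(c): c for c in chunks[0::2]}   # node A holds even chunks
node_b = {chunk_id(c): c for c in chunks[1::2]}   # node B holds odd chunks

def fetch(cid: str) -> bytes:
    # Ask every node we know about; any node holding the chunk can serve it.
    for node in (node_a, node_b):
        if cid in node:
            chunk = node[cid]
            assert chunk_id(chunk) == cid  # integrity check: hash must match
            return chunk
    raise LookupError(f"no node has chunk {cid}")

wanted = [chunk_id(c) for c in chunks]            # the "manifest" of the file
reassembled = b"".join(fetch(cid) for cid in wanted)
assert reassembled == file_bytes
```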
Thus, we benefit from:
- Higher bandwidth and faster streaming speeds, as we do not rely on a single connection to transfer the packets (the data).
- HTTP gives whoever runs the server full control over it and all the data it contains. IPFS distributes that control to the individual nodes. No one has exclusive control over the data, as many copies of it exist on other nodes, preventing censorship and centralization.
The current HTTP server model is also an excellent target for cyber attacks, as it has a single point of failure: find an attack vector that compromises the server, and you compromise all the information stored on it. IPFS is intended to work hand in hand with blockchains, where consensus protocols leave no single point of failure.
The Driving Force
Apart from Protocol Labs' own teams working on implementing the protocol on Ethereum, we also have plenty of major companies, like Netflix, looking into these technologies in an attempt to improve streaming capability and latency.
Other corporate giants are also working towards their own implementations of such a protocol, looking to benefit from the fast streaming speeds for now, and not so much from the decentralisation or security. Let's review some common issues with the protocol as of this date.
How Secure Is IPFS?
For any peer-to-peer system, one should wonder how secure a protocol can be when it allows such a massive exchange of bits between millions of supposedly anonymous nodes.
When you upload any data using IPFS, the protocol produces a special hash of the content, which acts as a key to that specific data, be it a file, a video or just text. Once the hash is generated, it is announced over a DHT (Distributed Hash Table): all of the nodes on the IPFS network get a broadcast from the origin node saying, 'Hey, I now have this file!'
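Conceptually, the DHT is a big shared map from content hashes to the peers that can provide them. Below is a toy Python model of the announcement step – a real DHT is distributed across the peers themselves, and IPFS encodes the digest as a multihash-based CID rather than a raw hex string.

```python
import hashlib
from collections import defaultdict

# Toy DHT: maps a content hash to the set of peers announcing they hold it.
dht: dict[str, set[str]] = defaultdict(set)

def announce(peer: str, content: bytes) -> str:
    cid = hashlib.sha256(content).hexdigest()  # address derived from content
    dht[cid].add(peer)                         # "Hey, I now have this file!"
    return cid

def find_providers(cid: str) -> set[str]:
    return dht[cid]

cid = announce("node-42", b"my holiday photos")
print(find_providers(cid))  # {'node-42'} - every listener now knows who has it
```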
With the DHT announcement, the data is discoverable by any node that received the broadcast. On a particular live blockchain network, for example, that would mean the entire network could access the file – which, by itself, isn't really the problem.
The problem itself is two-fold:
- DHT announcements expire; however, the companies offering these services can, and have a strong incentive to, monitor their DHT announcements to track what data is being uploaded to the network and by which node – removing anonymity and destroying decentralization.
- Similar to DHT upload announcements, requests from users can also be tracked and logged – giving whoever is watching a record of which machine accessed which data.
In reality, anyone wanting to track and log these activities on a public network would need either to target particular nodes or to throw serious computing power at logging all the traffic – which is not impossible for large tech companies, especially those that currently profit from selling our information to marketing agencies and governments.
While I think IPFS can be a great solution for Web 3.0, I would love to see the teams address the security issues, as they seem to be much the same as those of the current net. While scanning packets on the IPFS network isn't such a novel idea, the solutions for security are also quite familiar – encryption, gateways and, of course, private networks.
Since private networks aren't the main subject of this article, I will skip them for now.
Security Measures
- Encryption – Current encryption algorithms vary, but the most trusted among them is definitely AES. Without going into too much detail, it is one of the most secure algorithms around, and many national agencies use it to protect their sensitive information.
It is common practice in SSL/TLS, wireless networks, mobile networks and many more. If it's good enough for the NSA, it's good enough for you (unless you have a quantum computer hidden in your garage).
- Gateways – A gateway is a network node located at the far edge of the network (or the galaxy), controlling all inbound and outbound traffic. It is also used to interconnect two computers speaking different transmission protocols. Basically, you hand your letter (data, message, file) to an errand boy (the gateway) and trust him to deliver it; the recipient can only ever know of the errand boy, not you.
While gateways are poor in scalability, as each can bear only so much bandwidth at a time, they might be a solution for you – should you ever need to converse with an IPFS network. It's also worth mentioning that a gateway node can be configured to log any activity passing through it, so be mindful of that. Sketches of both measures follow below.
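First, a minimal sketch of the encryption approach: encrypt your data with AES-256-GCM before handing it to IPFS, so that anyone who fetches the blob sees only random-looking bytes. This uses the Python cryptography package; the key handling here is purely illustrative.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_ipfs(plaintext: bytes) -> tuple[bytes, bytes]:
    """Encrypt data with AES-256-GCM before publishing it to IPFS.

    Returns (key, blob) where blob = nonce || ciphertext. Only holders
    of the key can decrypt what the network stores and serves.
    """
    key = AESGCM.generate_key(bit_length=256)  # 32-byte AES key
    nonce = os.urandom(12)                     # standard 96-bit GCM nonce
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return key, nonce + ciphertext

def decrypt_from_ipfs(key: bytes, blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key, blob = encrypt_for_ipfs(b"my secret document")
assert decrypt_from_ipfs(key, blob) == b"my secret document"
```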
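And a sketch of going through a public HTTP gateway instead of joining the swarm yourself – the IPFS network only ever sees the gateway's address, not yours. The CID below is a placeholder; substitute a real one before running.

```python
import urllib.request

# A gateway translates ordinary HTTP requests into IPFS lookups.
# The gateway operator sees your request; the swarm sees only the gateway.
GATEWAY = "https://ipfs.io/ipfs/"
cid = "QmExampleCidGoesHere"  # placeholder CID, not a real file

with urllib.request.urlopen(GATEWAY + cid) as resp:
    data = resp.read()
print(f"fetched {len(data)} bytes via the gateway")
```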
Perhaps a slow implementation that complements HTTP is best at the moment, but I am very curious to see what direction the development teams will take with regard to these varying issues. I am certain it is a great improvement for the decentralisation of the web, but whenever you interact with a public network, acknowledge that you could be a potential target.