The Digital Privacy Model

Several technologies and systems are touted as protecting digital privacy. It is not that they do not; it is that they do so only in certain circumstances. It is imperative to deconstruct and deliberate on these technologies in the interest of truth. But before that, a few clarifications on privacy.

Contents
  1. Encryption
  2. Open Source
  3. Decentralisation
  4. Blockchain
  5. Non-Profit Nature
  6. What can ensure digital privacy?
    1. Business Model
    2. Legal Protection

Encryption

Encryption is touted as protecting privacy from all threats. Know that service providers can read your data if they choose to, irrespective of end-to-end encryption. Whoever encrypts or decrypts your data has access to your unencrypted data; that access is what allows them to encrypt or decrypt it in the first place. Therefore, end-to-end encryption does not, by design, ensure privacy from the entity that encrypts and decrypts your data, which in most cases is a technology company. In these cases, encryption only ensures privacy while data is in transit, not from the application you use or its developers. A user simply has to trust the word of the organisation or the developers that the application will not siphon data before encrypting or after decrypting.

Many organisations offering end-to-end encrypted services say that they cannot see your messages. This is false: they can see your messages if they choose to, because they are the ones encrypting and decrypting the data and thus have access to it unencrypted. They should instead say that they choose not to see your messages, which brings the argument back to merely trusting the organisation.
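The point can be sketched in a few lines of code. This is a toy model, not a real protocol: `xor_cipher` is a one-time-pad stand-in for a proper cipher, and `side_channel` is a hypothetical stand-in for any place a dishonest application could copy your data. The structure, however, is faithful: the encrypting code necessarily handles the plaintext.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad: XOR each byte with the key byte at the same
    # position. Symmetric, so the same call both encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, key))

def client_encrypt(plaintext: bytes, key: bytes, side_channel: list) -> bytes:
    # Whatever code performs the encryption necessarily holds the
    # plaintext. A dishonest client could copy it out before encrypting;
    # "side_channel" is a hypothetical stand-in for any such leak.
    side_channel.append(plaintext)
    return xor_cipher(plaintext, key)

key = secrets.token_bytes(32)
leaked = []
ciphertext = client_encrypt(b"a private message", key, leaked)

assert ciphertext != b"a private message"                    # unreadable in transit
assert leaked == [b"a private message"]                      # but the app saw it anyway
assert xor_cipher(ciphertext, key) == b"a private message"   # recipient decrypts as usual
```

Nothing in the encryption scheme itself prevents the leak; only the honesty of the code that calls it does.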

Open Source

Contrary to popular belief, the act of open-sourcing code does not ensure privacy. While it is possible to audit open-source code and trust it, there is no guarantee that the published code is the same code running on the servers or compiled for download. A modified version of the open-source code may be running on the servers or compiled for download. There may also be other programs running in parallel performing malicious activities. We cannot trust or distrust software or an organisation simply because the code is open source. Open-sourcing must not inspire trust.

Only if you audit the code yourself, then compile and install the software yourself, can you trust the code. But this implies that you do not trust the organisation or the project maintainers who have already compiled the code and either made it available for download or run it on their servers; which makes self-compiling more of a trust-less move than a validation of trust in open-source code.
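One way to bridge the gap between audited source and offered binaries is a reproducible build: compile the audited source yourself and compare fingerprints with the official download. The sketch below uses hypothetical byte strings as stand-ins for real build artifacts; the comparison logic is the real technique.

```python
import hashlib

def digest(artifact: bytes) -> str:
    # SHA-256 fingerprint of a build artifact.
    return hashlib.sha256(artifact).hexdigest()

# Hypothetical stand-ins for real binaries: the build you compiled
# yourself from the audited source, and two downloads offered by a project.
my_build = b"\x7fELF...built from audited source..."
honest_download = b"\x7fELF...built from audited source..."
modified_download = b"\x7fELF...built from audited source...+telemetry"

# If (and only if) the build is reproducible, a matching digest extends
# your audit of the source to the downloaded binary; a mismatch means
# the download is not the code you audited.
assert digest(honest_download) == digest(my_build)
assert digest(modified_download) != digest(my_build)
```

Note that this only works when the project's build process is deterministic, and it says nothing about code running on someone else's server, which you cannot fingerprint at all.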

In the case of open-source code running as a cloud service, only if you have audited the web server configuration and the code base on the server (perhaps using a read-only user account) can you even remotely consider trusting such a service by design.

However, open-sourcing can improve security, because more brains are generally better than a few. It lets you modify the software to fit any specific need. And although open source does not imply trust by nature, it draws some trustworthiness from its community, whose motives are not threatening or toxic to users but are instead based on the values of the free software philosophy and on not doing anyone harm. This inspires trust in the community, which, by association, is transferred to its products as well. To put it in other words, the flow of trust is the reverse of the general belief: you are not trusting the organisation or community because the code is open source; you are trusting the code because of the motives and principles of the organisation or community. In fact, this transfer of trust, that if you trust the maker you can trust the product, is seen in all products, including those created by for-profit corporations or from closed-source code, reiterating my point that open-sourcing does not inspire trust.

Decentralisation

Decentralised systems will not replace centralised systems: both will run the web together, depending on use cases. Neither is good or bad; tools and technologies have no intrinsic good or bad nature. What we use them for does.

A truly decentralised model is one with no central party at all, even to facilitate communication or storage: data is stored on your own device and communication is established device to device.

Today, decentralisation is an umbrella term covering various decentralised models. Let's start with federation. Federated servers are hosted by a person or a group, each with its own user base, and are able to communicate with other such servers. In other words, federation is a scattered centralised model that allows cross-communication.

From the privacy point of view, federated systems worsen the problem. Contrary to common perception, they too run on trust, requiring you to trust not just those running your server but also those running the other servers in the fediverse you connect with. But you can trust servers only if you know the people running them, which you don't. This naturally restricts one's trustees to a small number, unless one decides to rely on mutual trust; i.e., D runs the server, A knows B, B knows C and C knows D; therefore A trusts D. But such a chain of trust is not truly trust but a calculated risk, and federation simply adds to the number of parties you need to trust.

Also, if any malicious activity is caught, there is no guarantee that we can hold D accountable. Many of those who host federated servers are not legal companies or organisations but individuals or communities, making them harder to hold legally accountable than a legally registered organisation.

There is no privacy by technical design in the fediverse. The buzz around federated systems comes from the contrast principle. The big technology companies running centralised architectures lied, stole and abused user data, and garnered the image of evil corporations among many technologists, inspiring them to create a model that does not require trusting such entities; a natural reaction. But it turns out that in order to escape the clutches of corporations, developers created a system that requires users to blindly trust even more entities.

Blockchain

Blockchain, too, worsens the issue of privacy. Blockchain is merely a database system whose data is virtually impossible to modify, and it is its distributed nature that makes blockchains an aggravator of privacy issues. Take, for instance, blockchain-based domains, whose registry is hosted on multiple computers in a decentralised manner. The biggest privacy issue with domains of any kind is the registrant's personal information being made public.

Even assuming the data is encrypted, decentralised models like blockchain still make it easier to access the encrypted data. Note that you need two objects to decrypt encrypted data: the encryption key and the encrypted data itself. Once the encrypted data is delivered to you, only the encryption key is left to obtain. The point is: if you have to protect data, it makes sense to restrict access to the data itself.
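The two-object argument can be made concrete with a minimal sketch, reusing a toy one-time-pad cipher. The registry entry and the five-node "ledger" are hypothetical; the point is that replication hands every participant the ciphertext for free, leaving the key as the only remaining barrier.

```python
import secrets

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy one-time pad; the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, key))

record = b"registrant: alice@example.org"   # hypothetical registry entry
key = secrets.token_bytes(len(record))
ciphertext = xor_cipher(record, key)

# In a replicated ledger, every node stores a full copy of the ciphertext,
# so anyone who joins the network already holds one of the two objects
# needed for decryption.
nodes = [ciphertext for _ in range(5)]
attacker_copy = nodes[-1]

assert attacker_copy == ciphertext                  # data: freely replicated
assert xor_cipher(attacker_copy, key) == record     # key: the only barrier left
```

In a centralised, access-controlled database, an attacker must first breach the data store before the key even matters; replication removes that first barrier by design.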

Blockchain must be used according to the use case. It is useful where data must be tamper-proof at any cost and where privacy does not matter, such as the financial records of government spending or of non-profit organisations, and election votes.

Non-Profit Nature

There is a perception that only non-profit organisations can be trusted with personal data. I disagree and believe that this trust is nothing but a perceived trust.

The primary argument for trust in non-profits is that for-profit organisations have shareholders to whom they owe profit dividends. This is true, but it does not imply that organisations will break their ethos (if privacy matters to them) or the law (if laws protect privacy) in order to pay investors. In fact, investors who invest in a privacy-focused organisation, or in any organisation functioning in a jurisdiction where privacy is protected by law or constitution, know that the organisation will not or cannot make money by flouting privacy. Therefore, the responsibility to pay investors cannot be touted as a threat to privacy.

Every organisation needs to be profitable, even non-profits, in order to survive and grow. If profit is the motive to abuse data, then the same motivation applies to non-profits too. It is not profit or the striving for profit that motivates data abuse, but the values of the organisation and the ethos of its management. Mozilla is a great example of a non-profit that strayed from privacy in order to survive: it included Google as its default search engine for a price. On the other hand, Lavabit is a great example of a for-profit that closed down its business to stay true to its privacy promise.

Some say that the laws governing non-profit organisations put many restrictions on the flow of money, thereby ensuring trust. But these laws were fundamentally created to prevent accounting fraud. There are tax benefits and incentives attached to non-profit status, to exploit which many for-profit entities try to masquerade as non-profits. From a legal perspective, it can be argued that non-profits are bound by such laws and for-profits are not. But these laws can be circumvented by cunning accountants. A set of laws cannot immunise an organisation against the exhaustive list of frauds and exploitations. A thief always finds a way.

However, the laws may limit the damage if a non-profit chooses to go rogue. Even so, one cannot say that non-profit laws instil trust. Trust is earned or broken based on incidents, not on the gain or damage involved. The moment a management goes rogue, trust is broken, irrespective of damage control. Damage can only determine whether the broken trust can be repaired, or perhaps matter for legal recourse and settlements. In other words, whether or not your cheating on your partner resulted in a pregnancy, trust in you is broken. It is in the act of cheating that trust is broken.

There is a more convincing argument that non-profit laws are of no use in protecting privacy. Privacy is not always abused for tangible profits. Corporations can abuse privacy to build and improve other products and services, which reap profits at later stages. In such cases, there is no proper way to quantify the monetary value of the privacy abuse. Governments and law enforcement agencies have spied on their citizenry to prevent crime, control dissent, shape narratives, and so on. Here too, there is no proper way to monetarily quantify the abuse. Therefore, when the majority of privacy abuses are not intended for direct and quantifiable profits, I do not think that non-profit laws, which prevent a company from making x amount of money in a year or mandate it to spend y amount in a year, prevent privacy abuses. These laws exist fundamentally for accounting purposes, to prevent for-profit companies from masquerading as non-profits and exploiting non-profit laws.

Moreover, these non-profit laws do not prevent non-profit organisations from changing their business objectives, models, processes, values and ethos. Nor do they give users any say over such changes. So such laws are not of much help for privacy. Hence my belief that the increased trust in non-profit ventures is a perceived trust. Both can go rogue. It is the core values and principles of communities or organisations that guide their moral and ethical character and prevent them from committing fraud.

What can ensure digital privacy?

It is the combination of business model, technical design and legal protection that can provide the general public the maximum protection from privacy abuses. I have pondered on various models and I haven't come across anything as powerful as this combination.

Business Model

A business model that is built not on user data but on some other revenue stream eliminates the motive for data abuse in the first place. There will be no incentives etched into the business model itself to abuse user data. Consequently, the technical designs for such business models will be privacy-friendly, because building user-tracking and data-analysis tools requires resources.

Legal Protection

Speaking of laws, European countries like Sweden and Switzerland are considered to have the best privacy laws in the modern world. But laws can be changed if a majority of legislators wants it. Any ensuing citizens' protest will take time to yield results. Whatever needs to be compromised can be compromised during this interval.

India is one of the best countries for personal privacy. What makes India different from the rest is the status given to privacy by the Supreme Court of India — privacy is a fundamental right, and it is harder to mess with fundamental rights.

The right to privacy is protected as an intrinsic part of the right to life and personal liberty under Article 21 and as a part of the freedoms guaranteed by Part III of the Constitution.

Supreme Court of India

Suppose the government passes a law in the Lok Sabha that challenges users' privacy; such a law can easily be challenged judicially on the basis of the fundamental right. We could even get a stay on the implementation of the law in the event the case takes a long time. The biggest validation of this is the Aadhaar case.

When privacy conflicts with another fundamental right, I believe the right that upholds the spirit of justice must supersede the other. Take, for instance, an investigation into a human trafficking racket that intercepts the communications and meetings of an established offender or a convincing suspect. These are indeed protected by his right to privacy, but they contribute further to human trafficking, which violates the right to life and liberty. Here, I believe the trafficker's right to privacy must be infringed, considering that he is an established offender or a convincing suspect.

This logic, however, does not justify mass surveillance of citizens. The right to privacy cannot be challenged or compromised to find possible criminals. I do not support mass surveillance on the grounds of crime prevention.