
Our Concerns on Blockchain – Please prove us wrong

Before reading on, view this excellent YouTube TEDx talk on the value and benefits of blockchain:

Note the speaker's view on how uncertainty is addressed through:

1. Transparency of who you are dealing with, a user-controlled personal identity tool, and visibility of the transaction chain
2. Control of what aspects of identity you share to enable a transaction
3. Remedy/Accountability – she related this back to transparency/control (a bit weak, as it assumes legislation and an actionable response capacity on the part of the individual), but the right idea
These three elements are the building blocks of trust.
She said, "What keeps the blockchain verified is our mutual distrust" – "converting uncertainties into certainties".
All good… now the downside. Remember – blockchain is a PUBLIC distributed database:
a. Blockchain is in a massive hype cycle, mostly based on most people's lack of understanding of the underlying tech.
b. The tech relies on cryptography as the TRUST point – if we 100% trust the cryptography, we can 100% trust the blockchain. I guarantee that time undermines every encryption-based trust point, because time allows compute capacity to exceed encryption resistance. This would not be an issue if the blockchain's cryptography could be revised/updated – I have looked, and no-one has yet shown me how this can be done. Architecturally, blockchain has not facilitated this, which means that from a security PoV it is fundamentally flawed.
c. All blockchains use hashing, a one-way cryptographic technique that is extremely hard to reverse. So time is on blockchain's side; it won't get hacked any time soon. But therein lies its medium- to long-term issue if point (b) is not addressed, as humans tend to forget that the clock is ticking.
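The tamper-evidence that hashing provides (point c) can be illustrated in a few lines of Python – a minimal sketch of hash chaining, not any real blockchain's implementation:

```python
import hashlib

def block_hash(prev_hash: str, data: str) -> str:
    # Each block's hash commits to the previous block's hash,
    # chaining the whole history together.
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

# Build a tiny chain of three blocks from a genesis value.
chain = []
prev = "0" * 64  # genesis
for payload in ["tx: alice->bob 5", "tx: bob->carol 2", "tx: carol->dave 1"]:
    prev = block_hash(prev, payload)
    chain.append((payload, prev))

# Tampering with an early block changes every subsequent hash,
# so a forged history is immediately detectable.
tampered = block_hash("0" * 64, "tx: alice->bob 500")
print(tampered != chain[0][1])  # True: the forgery is detectable
```

Note that this guarantee rests entirely on the hash function remaining hard to reverse – which is exactly the time-limited trust point questioned in (b).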
d. A dedicated quantum computer (prototypes have already reached 51 qubits) could one day break this hash – and why wouldn't a government body dedicate a quantum computer to the task, if economies become based on it?
e. Unfortunately, some blockchain systems also use standard encryption as part of their infrastructure – in which case the time before those blockchains become untrustworthy for some aspect of their service is even shorter.
f. The management of a blockchain always needs a security infrastructure – and as yet, every one we have seen is effectively a centralised solution, undermining the distributed security value of blockchain. Where blockchains have been hacked to date, this is where the hacks happen, because such solutions are always hackable… refer back to (a) and the hype cycle.
Imagine a world built on blockchain for all transactions, and then the cryptography gets broken – not a world we want to be in when those consequences fall.
That said, for digital business to flourish we need to find ways to trust the security infrastructure, and blockchain is the best we have seen so far. But we really want to see the fundamental flaw of un-updateable blockchain hashes fixed before betting our customers' trust on it while this hype cycle (which creates undue expectations on customers' behalf) is in progress.
We could be wrong – we hope we are – but we have spoken with six different "experts" on blockchain and not one has yet demonstrated to us that our concern is unfounded.

A Business Philosophy for a Better Internet

When Digital meets Real - Trust is more important

In a world where digital and real life are ever more closely entwined, what is a better Internet?

Simply put, it's a place I can go and trust that I can lead a digital life that is as free as my real life. However, our digital freedoms, and consequently our real-world freedoms, are undermined by one simple thing: the commercial reality of funding the Internet.

“We need a new business model for the Internet.”

If we take a small step back in time, a mere 30 years, to the birth of the Internet, visionaries in Silicon Valley and elsewhere saw its awesome potential. But the sheer scale of the opportunity had to be funded somehow. It was recognized that data was the key, and that personal data in particular could be a new tradable asset. But who would buy it? The simple answer: advertisers. They spent $0.5 trillion in 2016, and this is expected to reach $0.73 trillion by 2020! Of course this is just the tip of the personal-data value iceberg, with Apple and Amazon combining this data value into shopping and device services of equivalent revenue scale. Apple, Amazon, Google and Facebook's combined 2016 revenue is greater today than the GDP (Gross Domestic Product) of 88% of countries – more than 176 of the world's 196 countries! These digital entities dominate in their various spaces and operate near-monopolies over our personal data, and through that our digital lives and thus our forming digital society. I would argue this advertising (and thus personal-profiling) based business model was a necessity 30 years ago, even 20 years ago, because no-one was personally willing to pay directly for services whose value they could not yet comprehend. Yet now we cannot conceive of living our lives without them. Try committing to never again use the products of just two of these four companies – it's virtually impossible for the average person on the street to imagine.

The issue is that these services, and thus our personal data, now power entire economies. Governments are unwilling or unable to challenge these dominants because they'd have to undermine digital service to the voting citizen. In short, they are stuck and have to engage in the process of harvesting our personal data, or at least participate indirectly by funding advertising through these entities, in order to stay in or gain power. The recent debates on the Brexit and Trump campaigns show how paid-for digital influence by governments is suborning democracy, raising serious cause for concern for our freedoms as citizens.

The trust-undermining issue with advertising as an Internet funding model is that it's highly competitive. The only way for one digital agency to succeed over another is to provide greater targeting – which in turn requires ever more detailed profiles of who we are, where we are, when we are there, and, even better, predictive analytics of what we'll do next – all obtained using ever more deceitful techniques. Facebook now profiles its users in over 52,000 different ways! The advertising model can only grow by delving ever deeper into our personal data. The digital dominants sustain their position by knowing more about you than anyone else in their sphere of business. This means your social life (Facebook), your purchasing power and habits (Amazon), your interests (Google search), your physical movements (Google Android and Apple iOS), and a lot more.

“Advertising drives a commercial and economic need to digitally model every aspect of the real world and our real lives.”

The initial advertising funding model was developed to solve a problem; that problem has now been solved. Everyone 'gets' the value of the Internet. So why do we persist in pursuing the same old commercial model when the harms to our freedoms are becoming so apparent? All this data stored in the cloud is fundamentally insecure; no one can assure you or me that it can be maintained securely for the rest of our lives, yet this is what Apple, Facebook, Amazon and Google would have you believe, despite the daily evidence of insecurity to the contrary. Even if it were 100% secure, you and I as citizens have no idea who has it or for what purpose it's being used, fundamentally undermining our privacy, and through that our very trust in Internet service provision – which is demonstrably declining every year.

So what has changed that can allow a new business model to develop? Perversely, it's the very success of the Internet. Digital is everywhere; in some countries more people have a smartphone than have a toilet or access to running water. Digital has become integral to our daily lives – almost a necessity, certainly for a functioning, economically progressive society able to compete in the global market.

In some parts of the world (the EU especially, taking a global lead) digital businesses are facing a tsunami of legislation. We at Krowdthink are big fans of the GDPR and have great hopes for the E-Privacy regulation in development, as well as standards like eIDAS and other regulations – not because regulation is good, but because they are trying to inculcate trust and freedoms into digital society. Regulation has also become necessary to curb the slide towards distrust in digital services, but it is a stick, not a carrot. Business won't 'really' change unless it sees commercial value and opportunity in changing – too many businesses have persisted with the same old models even in the face of certain demise. In short, the Internet itself is ripe for disruption – and disruption won't come from technology, it'll come from a change in business model and digital market organization.

I have worked in real-time control systems for over 25 years, and what we at Krowdthink see is that the Internet, and indeed the burgeoning Internet of Things, has moved from a transactional IT-style infrastructure towards a real-time system of systems. Data is flowing continuously everywhere – personal data is often referred to as our digital exhaust, as we leave trails everywhere we go and in everything we do. But an exhaust is not a left-over to be stored and processed; it's live information that can facilitate business opportunity. Through research and real-world application we have gained a fundamental understanding of how people interact. The Internet has given us this understanding. Safely executed research can give us the analytical tools to use data in real time without the advertiser's need to subsequently track and profile all we do.

“What has been missed is the opportunity to build a new business model for the Internet based on real-time information flow. We don’t need to profile, track and model every single person on the Internet in order to deliver them digital value.”

Collating historic profiling data puts people at risk from the nefarious criminal element that is showing its strength against the weakness of the Internet. The criminals' goal is access to personal data for subsequent fraudulent activity, or the denial of access to data for service denial. Our data is both the strength and the weakness of the Internet. What if we did not collect data? What if we minimized what was stored to almost insignificant levels? What if we built transactional value on real-time data – information volunteered and made available at a point in time in order to gain access to digital value? What if we re-invented existing services using this model? We at Krowdthink believe strongly that many (not all) Internet services can be re-invented this way – and by doing so, by making the data attack surface smaller, we will end up with a better, more trustworthy Internet, one where we'll all engage more confidently and actually share more.

"Let's monetise the engagement opportunity – not people's personal data"

Redefining Trust in Digital Engagement

Krowdthink has spent a long time researching and thinking about how to build a business that is fundamentally trustworthy when delivering a digital service. This blog summarizes our implementation. The simplest expression is our Trust Pyramid. Its articulation has been described by some as an ethical business model, we make no such claim but appreciate the perspective.

[Image: trust model]


Too many organizations think of trust as a binary condition, in which some customers trust the entity and some do not, and the aim is to get a higher percentage to state their trust. But such goal-setting subverts the whole objective of trust. It becomes a game, and often, especially online, it's less about being worthy of trust and more about gaining trust – frequently using methods that are not worthy of it, a tactic that time will always reveal. What's needed is a strategy that gets stronger as time reveals the company's true activities. Hence fostering a company culture that seeks the unattainable goal of being trustworthy is the only sustainable approach to addressing the online trust deficit built up by 15 years of business models that gamify the customer engagement process.

There are two primary perspectives of trust that differ but share the commonality of customer/user confidence:

1. An attitude of confident expectation that one's vulnerabilities will not be exploited
2. Confidence that the value exchange is fair and equitable, and that loss of trust drives an equivalent/proportional consequence on both parties

The issue of consequence will be addressed later, but is a critical component for the development of trustworthy commercial engagement.

Empowering the User/Customer

Trust is obtained through the commitment to empower the customer/user in the engagement process in an open, balanced, mutually beneficial way. There are three pillars of empowerment that a business must engage in to foster customer/user trust. In the digital context, a useful empowerment thought process is to consider all personally identifiable data – whether directly obtained (name, address, etc.), indirectly obtained meta-data (how a customer engages, when, where, etc.), or derived (information created through algorithms or correlation with other data sets) – as conceptually belonging to the customer/user. The business is merely its temporary custodian. In the EU, the GDPR will legally enforce this perspective.


1. Transparency

The business must be utterly transparent with its customer/user about what Personally Identifiable Information (PII) it obtains and how that data is used. It should be open about how PII is secured at rest and in transit.

We suggest publishing all data types relating to PII, with simple descriptions of their use and of how the data is secured.

The GDPR is driving the evolution of new consent models with consent receipts that could eventually replace this publication process.

It is recognized that most users will have limited ability to assess the veracity of this information, but if all businesses followed this approach, journalists would get savvy, and assessment methods and entities would evolve to determine whether the data processing meets the standards set out in the rest of this model.


2. Control

The company should maximize the user's/customer's ability to control what PII is shared with whom, whilst ensuring clarity about the limited purposes for which such PII is shared. This is both a UI issue and a UX issue – UX more so, because the customer's/user's assumed perspective should always be the one that holds true. Control also means the ability to withhold PII whilst still operating constructively in the digital engagement process. Of course many online services will not function without some level of PII sharing, but innovative thought about minimizing what's needed to deliver a capability will help facilitate simple communication of control over the limited amount of PII needed to drive the online engagement function.


3. Accountability

Many trust papers talk about accountability as the third empowerment strut for trust. However, this often defaults to an understanding that it's accountability to the law. The law is always the lowest possible trust bar; it's a codification of privacy and security principles that sets the lowest standard for operation. Worse, it is defined mostly by business and its pro-commercial lobbyists; worse still, rarely does the individual have the financial, technical or legal capacity to pursue their rights under the law. Most privacy lawyers have become risk mitigators for their commercial clients.

Truly trustworthy companies seek to empower their customers/users with the ability to take their business to a competitor when or if trust in their current service provider is lost. This is an extremely scary statement for most commercial entities, who primarily seek customer lock-in whilst not realizing that trust is the best lock-in of all. In fact, the act of coerced lock-in is a degradation of trust and is potentially unsustainable for the business.

To get trust one must first trust: trust your business to do right by its customers, and trust your customers to do right by you in return. When you screw up, this trust will be rewarded with the flexibility and time to address the screw-up.

When trust is lost and one party feels they have suffered a significantly more consequential loss than the other, recovery of trust becomes even harder.

The GDPR creates fundamental new rights for customers both to delete their PII and to move it to a competitor. Far better to embrace these legislative requirements as business trust assets than to fight the law. Giving users the simple and unobstructed ability to delete and move their data is an empowering trust asset: the customer now feels they have, individually and en masse, the ability to inflict consequence on a company and thereby ensure a change of operation in their favour. In this hyper-connected world they can anyway – you may as well embrace it as a business asset.

Development Principles

To deliver against the trustworthy aspiration, one has to foster a culture in the company that consistently strengthens the trust model. It has to cover technology and process, in development and innovation as well as in the commercial engagement model.


Privacy by Design (PbD)

The 7 principles of privacy-by-design call out the aspects of the trust model and security model needed, focused on assuring customer privacy in the digital engagement process.

If properly followed, you will end up with apps and services that don't need privacy settings, because each stage of sharing becomes an explicit opt-in. In fact, arguably, a privacy setting is a privacy fail!

The explicit opt-in thought process, or consent model, is central to achieving PbD. PbD is not about not sharing; it's about being in control of what PII is shared with whom and for what purpose, backed by confidence that the service provider protects the user/customer from its own or a 3rd party's inadvertent or deliberate use of their PII outside their comprehension.
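As an illustration only – hypothetical names, not the Krowd app's actual code – an explicit, purpose-bound opt-in gate for sharing PII might be sketched in Python like this:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Consent:
    field_name: str      # which piece of PII is covered
    recipient: str       # who it may be shared with
    purpose: str         # the stated, limited purpose
    granted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

class ShareGate:
    """Nothing is shared unless an explicit, purpose-bound consent exists."""
    def __init__(self):
        self.consents = []

    def opt_in(self, field_name, recipient, purpose):
        # Consent is always an affirmative act recorded per field/recipient/purpose.
        self.consents.append(Consent(field_name, recipient, purpose))

    def may_share(self, field_name, recipient, purpose):
        return any(c.field_name == field_name and c.recipient == recipient
                   and c.purpose == purpose for c in self.consents)

gate = ShareGate()
gate.opt_in("location", "venue_app", "show nearby events")
assert gate.may_share("location", "venue_app", "show nearby events")
assert not gate.may_share("location", "advertiser", "profiling")  # default is private
```

The point of the sketch is the default: with no matching consent record, the answer to every share request is no – there is nothing for a privacy setting to switch off.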


Security by Design (SbD)

It needs to be recognized that security is not an absolute. It's an arms race, and every digital company will suffer a security breach if it operates long enough. So it's critical that this objective is seen in the context of the next one, data minimization. Data minimization is a solid underlying principle of security design: keep the value of what PII you hold to a minimum – it's as much a liability as an asset.

Consider security before writing the first line of code of a system. Adding security later is harder and more expensive – in some cases impossible without re-architecting the system. So architect for security, don't just code for it.

All security relies on a trust point, even encryption. So remember: every security system is as fallible as the human processes that surround it. Security is thus a process of privileged access management, in both human and coding terms. Hence it has to be inculcated into the culture of all aspects of the business.

User Data Minimization

This principle underpins both PbD and SbD. Given that no system can be guaranteed secure, it is incumbent upon a trustworthy entity to minimize the PII it holds, so that when a data breach occurs the consequence to the individual is minimized. Time is the greatest exposure tool for every digital system; even encryption gets progressively easier to circumvent as time progresses. In SbD it's a key principle: understanding what data you hold and how it is used is critical to architecting a security system. Less data means less complexity in securing it, and thus less risk of insecurity in the system.

Don't obtain PII you don't need, don't store what you don't need to, and don't transmit what's not needed. These simple rules maximize the potential to keep PII private and minimize the consequences of a security breach. Properly embraced, they lead to innovations that strengthen the trustworthiness of a company.
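These rules can be made concrete with a minimal Python sketch (the function name and fields are hypothetical): instead of persisting a raw identifier, keep only a salted one-way reference (enough for deduplication) plus a coarse region, so a breach exposes almost nothing.

```python
import hashlib
import os

def minimal_record(email: str, city: str) -> dict:
    """Store only what the service needs: a salted, one-way reference
    to the user and a coarse region – never the raw identifier."""
    salt = os.urandom(16)
    # A slow, salted KDF makes the stored reference hard to reverse or brute-force.
    ref = hashlib.pbkdf2_hmac("sha256", email.encode(), salt, 100_000)
    return {"ref": ref.hex(), "salt": salt.hex(), "region": city}

record = minimal_record("alice@example.com", "Cardiff")
assert "email" not in record  # the raw identifier is never persisted
```

What is not collected cannot be breached; the attack surface shrinks to a token that is worthless outside the service itself.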

No Covert Profiling or Tracking

This principle needs to be called out explicitly because covert profiling is the default operation of almost all online services today, whether a simple website or a complex app. 'Cleverly' obtaining information about users – where they are, who they are, what they are thinking, their state of mind, etc. – without communicating explicitly to the customer/user that this is happening is central to the engagement culture of developers everywhere. Reversing this trend requires an explicit recognition of it, and a topline commitment not to operate that way, if there is to be any chance of a constructive reversal in the culture of digital engagement models.

Open Business Model

The default business model of the Internet is the fundamental cause of the downward spiral of trust in digital engagement. The pervasive sentiment that, because I am online, the normal rules of society no longer apply is driving dishonest engagement, reluctant sharing and active obfuscation. A digital society that drives these norms is not constructive; it is destructive. It seeks to set new norms. Yet these norms are not fostered in constructive debate – they are covertly forced upon us and only exposed with time, all in the name of commercial endeavor that primarily benefits the four horsemen of Google, Facebook, Apple and Amazon.

The Internet is still in its infancy, and the current business models were needed to enable innovation: no individual consumer was prepared to pay their way directly, so payment has come primarily through profiling people, allowing ever greater insight into their needs and wants to better target adverts or services. This model will continue well through the Internet's teenage years and into its young adulthood. However, there is clearly a community of users that desires a more constructive digital engagement model, one in which they are respected and can thus trust their suppliers. The only way this can be fostered is to be totally open about the business models of the new breed of businesses. Allow the business model to be debated and validated; only then can trust be fostered.

Most existing online business models can be flipped, offering users/customers new methods of engagement in which they are empowered over what is sold, to whom, and for what purpose.

We at Krowdthink are focused on the engagement model, using real-time data to create unidentifiable groups of people who share common targetable advertising assets such as location, time and contextual interest. We thus make real-time engagement the revenue asset, without exposing any PII or sustaining any profiling history – advertisers target the app, not the people, and the app groups people in real time. Advertising value is determined not by individual profiles and preferences but by people's value as a group – in our case, a crowd. The trick is that we do not need to profile people in order to group them: they group themselves at a point in time, in a place, around a specific interest. We have the added benefit that value is driven to the place owner first, as they are co-located with the Krowd and thus have a real-time contextual capability to deliver advertising engagement value. This opens up a new engagement opportunity for businesses otherwise forced to engage digitally via the personal-profiling business model that our temporally and locationally displaced Internet demands. Managing this advertising process so it does not intrude on the connected experience is as much of a challenge for us in the Krowd app as it is for Facebook or Twitter; however, we have a simple ad-block tool – an annual subscription to the app can block ads or allow users to tailor and select the ads they want to see. These business models will be implemented by Krowdthink once we have a sustainably large customer base. The business model does not intrude on privacy at all, nor does it require users to think differently from the existing revenue models Internet or app users are familiar with, but its implementation balances their interest in privacy with the need for our business to cover costs and make a profit.

A Philosophical View of Digital Society

We often get asked by investors: why don't you take the easy option? You have a solid idea for localised digital engagement – just get the data, sell it, and make us all lots of money.

The short answer comes back to this: do we want to contribute to a digital society where this is the modus operandi? Is this the digital society we want to leave behind for our children? The answer is no.

But we have to be realistic – there are two tsunamis of pressure that make it difficult to stand against this way of commercialising online engagement. The first is that the economic imperative will always pressurise digital societal and individual rights – as true for the way governments operate as for the individual business. The law is usually insufficient to defend against this pressure: it is retrospective in the face of rapid technological change, and it's written by humans, so it's open to interpretation by those with the funds to do so, while many citizens don't have the intellectual or financial capacity to exercise their rights. Commercial and economic pressure pushes hard towards favouring the economic opportunity.

The second issue is that commercial law trumps privacy law – not explicitly in law, but in terms of implementation. Every director of every company has an overriding imperative to return a profit or financial return to its shareholders. In digital society, personal data is the currency traded to deliver on this imperative. The law constrains some uses of this data, but insignificant fines and weaknesses in the enforcement structures, nationally and especially internationally, mean that personal privacy has become a cost of doing digital business.

But consumers are starting to become informed about how they are traded online, and there is a growing resistance to it. It is mostly reflected in the media as security breaches, and big businesses are aware it is becoming a trust issue. But there is a big change coming – the General Data Protection Regulation seeks to get ahead of the law's traditionally retrospective position in digital society. There are real consumer powers in it, and massive fines for those businesses that do not comply.

The innovation opportunity is not just to embrace the regulation as a positive construct; it's to recognise that the commercial status quo of digital engagement is about to change, and that this change can be accelerated by innovators embracing the underlying principles of privacy en route to constructing a better digital society – in short, to compete on the basis of privacy value and trust in digital engagement. We can disrupt if we do this.

The only answer to the two commercial pressures outlined above is to make privacy an asset worth more than the covert trade in our personal digital assets. When that happens, governments and businesses will combine to create a better digital society for our children. This is what we strive for. It's why I am driving the creation of an event called Privacy: The Competitive Advantage. Hope to see you there.



Privacy Settings are a Privacy Failure

The EU GDPR (General Data Protection Regulation), being written into statute this month, explicitly calls on those storing or operating on personal data to follow the 7 principles of Privacy by Design, the 2nd of which is "Privacy as the Default Setting".

If you follow the simple logic that all operations on your or my personal data are private by default, then really there is no need for privacy settings – none. In fact, the number and complexity of privacy settings correlates directly with the inherent lack of privacy in the platform or product you are using, generally driven by the platform provider's business model of monetising you.

As app developers who fully embrace these principles, it is notable that our Krowd app has no privacy settings function. By starting with respect for people's data – treating it as if owned by the individual, which means maintaining provenance of all data, meta-data and derived (analytic) data – every share becomes an explicit opt-in decision by the user, and the app interface makes clear what is being communicated, with whom, and for what purpose. This is the essence of privacy. Privacy is a function of control over what is shared with whom and why; it is not a lack of sharing.

Maintaining provenance also allows us to follow another GDPR principle – the right to delete – something incumbent platform providers will find almost impossible to implement without having tracked provenance.

When the business model of a social platform like Facebook is to monetise who you are, it has to start with the basic assumption that everything you share belongs to it, not to you – as does how and when you share (meta-data) and what can be derived (through analytics) from the aggregate of all this data. Hence Facebook's privacy policy makes it clear: all your data belongs to them. They use privacy settings as a means to tick legal boxes and to give back to the individual a limited level of control over some of the data (none of the meta-data or derived data – something we suspect will be challenged in the future). On the flip side, this also means there is an effectively infinite number of ways they might use your data in a manner you may feel breaches your privacy. Hence their platform, challenged by the latest GDPR legislation, ends up with an increasingly complex set of privacy settings – a list that will only grow over time, eventually (if it has not already) defeating the very objective of empowering users with control over the use of 'their' data through those very settings.

Wi-Fi Location Privacy as a Commercial Asset


The Big Data mentality that pervades almost all Internet deployment technologies, services and apps tends to think of location data as the diamond in the Big Data treasure chest. It is the most insightful of data: it can be used to indirectly determine interest in the things around people, it can discern who are friends or colleagues even if they never connect digitally, and it can be used to spark events, such as making a specific coffee order for a customer as they enter the café.

When aggregated and correlated with other data, it can ultimately define, in depth, who someone is. Best of all, it requires no user input – it's a passive monitoring facility. So why is it that, despite smartphones having had GPS for over 15 years, we are not seeing widespread use of the data, except in mapping/routing services and of course Uber – who quickly got in trouble for not being careful enough with it?

Part of the answer is of course legislative, at least in Europe. The mobile operators have been tracking our smartphone locations for years; they even commonly get us sucker consumers to tick a consent box when we get our SIMs, allowing them to use this data for whatever commercial purpose they like. But they are very careful how they use it. They understand, much as Uber did not, that unfettered access to and careless use of such data is a potential nuclear bomb for their brands.

But the real answer is the simple observation that users don't or won't opt in to location tracking unless they see an unambiguous, immediate benefit. In short, it creeps them out. They may not understand how the meta-data of their myriad online interactions profiles them, but there is an instinctive awareness that having your location tracked provides insights they'd rather not share.

It's a trust issue, just as the Uber issue became. In 2013 only 11% of mobile app users said they would be willing to share their location data with a mobile app. That percentage may be even lower today, as users become more informed about how they are tracked and profiled.

So what is the response of business? In general, to try to collect the data indirectly. For example, 59% of retail fashion stores in the UK now use facial recognition cameras to track shopper movements. Is this legal? Very doubtful, but currently untested. The main protection is to claim it is done anonymously. That is hard to do with something as specific as a facial image signature! In fact, it is mathematically arguable that there is no such thing as anonymity in Big Data.
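The fragility of 'anonymisation' is easy to demonstrate with a toy example (all data below is hypothetical, purely for illustration): even with names stripped, a handful of so-called quasi-identifiers, such as postcode area, birth year and gender, is often enough to single out an individual record.

```python
from collections import Counter

# Toy "anonymised" dataset: names removed, quasi-identifiers kept.
# All values are hypothetical, for illustration only.
records = [
    {"postcode": "CF10", "birth_year": 1984, "gender": "F"},
    {"postcode": "CF10", "birth_year": 1991, "gender": "M"},
    {"postcode": "SW1A", "birth_year": 1984, "gender": "F"},
    {"postcode": "CF10", "birth_year": 1984, "gender": "M"},
    {"postcode": "SW1A", "birth_year": 1975, "gender": "F"},
]

# Group records by their quasi-identifier combination.
key = lambda r: (r["postcode"], r["birth_year"], r["gender"])
counts = Counter(key(r) for r in records)

# A record whose combination is unique is effectively re-identifiable
# by anyone who knows those three facts about a person.
unique = sum(1 for r in records if counts[key(r)] == 1)
print(f"{unique} of {len(records)} 'anonymous' records are unique")  # 5 of 5
```

In this tiny sample every record is unique on just three attributes; real datasets with richer attributes (and location traces in particular) behave the same way at scale.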


It's why the ICOs (data protection regulators) around Europe carefully use the 'best efforts' clause when interpreting anonymous data control. But what does that mean? Again, untested in law. So in short, companies play fast and loose with this data and pray they are not the ones to get caught. But a big legislative change is coming.

The root-and-branch revision of the Data Protection Act, the GDPR (General Data Protection Regulation), is due to be adopted in 2016. Not only is it a tighter definition of privacy, updated to deal with modern technical capability, it raises the bar on fines from a few hundred thousand Euros to tens of millions of Euros or more! Privacy is no longer something that can be dismissed as a cost of doing business.

So how do we unlock Wi-Fi based location value? The short answer is that Wi-Fi can co-locate people without any knowledge of where they are, creating the opportunity to bring the cloud down to the crowd and deliver localised digital engagement through location-private solutions.
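One way to read 'co-location without location' is to group devices by the access point they currently see, never storing geographic coordinates at all. This is a minimal sketch of that idea, not Krowdthink's actual implementation; the identifiers are hypothetical:

```python
import hashlib
from collections import defaultdict

def crowd_key(bssid: str) -> str:
    """Derive an opaque crowd identifier from an access point's BSSID.

    The hash is one-way, so the service never learns *where* the access
    point is, only that two devices can currently see the same one.
    """
    return hashlib.sha256(bssid.encode()).hexdigest()[:16]

# Hypothetical sightings: (device, BSSID of the access point it sees)
sightings = [
    ("alice", "aa:bb:cc:01:02:03"),
    ("bob",   "aa:bb:cc:01:02:03"),
    ("carol", "de:ad:be:ef:00:01"),
]

# Bucket devices by opaque crowd key: same key means co-located.
crowds = defaultdict(set)
for device, bssid in sightings:
    crowds[crowd_key(bssid)].add(device)

# alice and bob form a crowd, yet no coordinates were ever stored.
for crowd, members in crowds.items():
    if len(members) > 1:
        print(f"crowd {crowd}: {sorted(members)}")
```

The design choice here is that the platform can offer crowd-scoped services (chat, offers, events) keyed on the opaque identifier, while geographic location remains unknown to it.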

But tech alone does not unlock the location value proposition; what's also needed is an engagement model designed to engender user trust in the service provider. By gaining user trust we can foster localised engagement and, through opt-in mechanisms, unlock localised commercial value.

To that end Krowdthink has spent years researching and evolving a trust model. We will present and debate our Trust Pyramid when we formally launch. We are already seeing institutions like the Mobile Ecosystem Forum start to define a trust model, and the UK Digital Catapult has a Privacy and Trust initiative that will soon produce its methodology for trusted digital engagement.

Privacy, placed in a trustworthy engagement model, will become the next commercial value proposition for businesses.

Privacy as a Commercial Asset

Let's start with a fundamental statement of belief: the current state of the Internet with regard to the commercialisation of our data and every online interaction, especially sensor-based data collection as typified by the IoT and smartphones, will not abate until we find a commercial model that makes the delivery of privacy a commercial asset.

Arguably this is precisely what Tim Cook is doing at Apple as he starts to play the privacy card strategically. He points to the age-old truism that without privacy we cannot have freedom. Apple, however, makes money by selling consumer goods, so for him privacy is a competitive differentiator against Google, whose whole business model, for almost everything it does, is to collect our data, profile us with ever-increasing accuracy and sell that data indirectly for ever-increasing software value. Yet Apple still collects as much data as it can, asking for the implicit trust of its users that it will keep that data secure. So is Apple any better than Google?

As any half-competent technologist will tell you, it is impossible to secure all that data online, especially when the data exists for what is likely to be a whole lifetime. It can, and almost certainly will, leak out. The recent spate of highly publicised hacks such as Ashley Madison is just the tip of a huge iceberg of security breaches that the technorati have known about for a long time. Even Apple's iCloud was hacked; whether that was a failing of their systems or user error is hard to confirm, but either way the data was breached.

So building on Apple's strategic play, while mitigating the security concern, becomes something any company could potentially leverage. As consumers come to understand the potential cost of allowing their data to be obtained and used by any online commercial entity for any purpose, they will want to claw back control; as they suffer the consequences of data loss, they will move from desire to need. But studies show consumers don't change their behaviour much even after a consequence is visited upon them. Why? Because there is no real choice, except through a deep understanding of the tech and of personal protection tools, a barrier too high for 90%+ of consumers. Yet another disempowerment issue.

Control is the key enabler of privacy, and control based on an understanding of what data is held, by whom and for what purpose is the essence of a privacy platform. These are empowerment issues. Today the commercial entity is empowered, through our data, to be in control. We have to trust them; we have no choice. But what if we could choose to trust an entity because it seeks to empower us as consumers? That is a different thought process.

Offering competitive choice is the essence of Krowdthink's Trust Pyramid. In a future blog we'll discuss how to make revenue from this trust model. We say now: it is not a replacement for existing models, and the Internet as it is will exist for years to come, but we can re-invent existing products with trust-based alternatives and monetise them in subtly different ways. We can also tap into market sectors previously inaccessible due to consumer trust and privacy concerns.