
Redefining Trust in Digital Engagement

Krowdthink has spent a long time researching and thinking about how to build a business that is fundamentally trustworthy when delivering a digital service. This blog summarizes our implementation, whose simplest expression is our Trust Pyramid. Some have described its articulation as an ethical business model; we make no such claim, but we appreciate the perspective.

[Figure: the Trust Pyramid trust model]


Too many organizations treat trust as a binary condition: some customers trust the entity, some do not, and the aim is to get a higher percentage to state their trust. But such goal-setting subverts the whole objective of trust. It becomes a game, and often, especially online, it is less about being worthy of trust and more about gaining trust, frequently by methods that are not worthy of it, a tactic that time will always reveal. What's needed is a strategy that gets stronger as time reveals the company's true activities. Fostering a company culture that pursues the unattainable goal of being trustworthy is therefore the only sustainable approach to addressing the online trust deficit built up by fifteen years of business models that gamify the customer engagement process.

There are two primary perspectives of trust that differ but share the commonality of customer/user confidence:


An attitude of confident expectation that one’s vulnerabilities will not be exploited


Confidence that the value exchange is fair and equitable, and that a loss of trust carries an equivalent and proportional consequence for both parties

The issue of consequence will be addressed later, but it is a critical component in the development of trustworthy commercial engagement.

Empowering the User/Customer

Trust is earned through a commitment to empower the customer/user in the engagement process in an open, balanced, mutually beneficial way. There are three pillars of empowerment a business must engage with to foster customer/user trust. In the digital context, a useful thought process is to consider all personally identifiable data, whether directly obtained (name, address, etc.), indirectly obtained meta-data (how a customer engages, when, where, etc.), or derived (information created through algorithms or correlation with other data sets), as conceptually belonging to the customer/user. The business is merely its temporary custodian. In the EU, the GDPR will legally enforce this perspective.


The business must be utterly transparent with its customer/user about what Personally Identifiable Information (PII) it obtains and how that data is used. It should be open about how PII is secured at rest and in transit.

We suggest publishing all data types relating to PII, with simple descriptions of their use and of how each is secured.

The GDPR is driving the evolution of new consent models with consent receipts that could eventually replace this publication process.

It is recognized that most users will have limited ability to assess the veracity of this information, but if all businesses followed this approach, journalists would get savvy and assessment methods and entities would evolve to determine whether the data processing meets the standards set out in the rest of this model.
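One way to make such a publication concrete is a machine-readable register of PII types. The sketch below is purely illustrative; the data types, purposes, security notes, and field names are hypothetical examples, not a prescribed schema:

```python
# Hypothetical machine-readable PII register a business could publish.
# All entries here are illustrative only, not a real company's data inventory.
import json

pii_register = [
    {
        "data_type": "email address",
        "category": "directly obtained",
        "purpose": "account login and service notifications",
        "secured_at_rest": "encrypted database column",
        "secured_in_transit": "TLS 1.2+",
    },
    {
        "data_type": "login timestamps",
        "category": "meta-data",
        "purpose": "fraud detection, retained 30 days",
        "secured_at_rest": "encrypted, access-logged",
        "secured_in_transit": "TLS 1.2+",
    },
]

def publish_register(register):
    """Render the register as JSON so outside assessors can review it."""
    return json.dumps(register, indent=2)

print(publish_register(pii_register))
```

Publishing such a register alongside the privacy policy gives journalists and future assessment entities something concrete to check the live service against.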


The company should maximize the user/customer's ability to control what PII is shared with whom, whilst ensuring clarity about the limited purposes for which that PII is shared. This is both a UI issue and a UX issue; UX matters more, because the customer/user's assumptive perspective should always be the one that holds true. Control also means the ability to withhold PII whilst still operating constructively in the digital engagement process. Of course, many online services will not function without some level of PII sharing, but innovative thinking about minimizing what is needed to deliver a capability makes it easier to communicate simple controls over the limited PII that is genuinely required.


Many trust papers cite accountability as the third pillar of empowerment for trust. However, this often defaults to an understanding that it is accountability to the law. The law is always the lowest possible trust bar: a codification of privacy and security principles that sets the minimum standard for operation. Worse, it is defined mostly by business and its pro-commercial lobbyists; worse still, the individual rarely has the financial, technical, or legal capacity to pursue their rights under it. Most privacy lawyers have become risk mitigators for their commercial clients.

Truly trustworthy companies seek to empower their customers/users with the ability to take their business to a competitor when or if trust in the current service provider is lost. This is an extremely scary statement for most commercial entities, which primarily seek customer lock-in without realizing that trust is the best lock-in of all. In fact, coerced lock-in degrades trust and is potentially unsustainable for the business.

To get trust, one must first trust. Trust your business to do right by its customers, and trust your customers to do right by you in return. When you screw up, that trust will be rewarded with flexibility and time to address the mistake.

When trust is lost and one party feels it suffered a significantly more consequential loss than the other, recovery of trust becomes even harder.

The GDPR creates fundamental new rights for customers both to delete their PII and to move it to a competitor. Far better to embrace these legislative requirements as business trust assets than to fight the law. Giving users a simple, unobstructed ability to delete and move their data is an empowering trust asset: the customer now feels they can, individually and en masse, inflict consequence on a company and thereby drive a change of operation in their favour. In this hyper-connected world they can anyway; you may as well embrace it as a business asset.

Development Principles

To deliver against the trustworthy aspiration, a company has to foster a culture that consistently strengthens the trust model. It has to cover technology and process in both development and innovation, as well as in the commercial engagement model.


The 7 principles of privacy-by-design call out aspects of the trust model and security model needed, focused on the context of assuring customer privacy in the digital engagement process.

If properly followed, you will end up with apps and services that don't need privacy settings, because each instance of sharing becomes an explicit opt-in. In fact, arguably a privacy setting is a privacy fail!

The explicit opt-in thought process, or consent model, is central to achieving PbD. PbD is not about not sharing; it's about being in control of what PII is shared with whom and for what purpose, backed by confidence that the service provider protects the user/customer from its own, or any third party's, inadvertent or deliberate use of their PII beyond their comprehension.
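As a minimal sketch of what "every share is an explicit opt-in" can look like in code (the class and method names are hypothetical, not drawn from any real consent library):

```python
# Hypothetical per-purpose opt-in consent model: sharing defaults to OFF,
# and every (data item, recipient, purpose) share needs an explicit grant.

class ConsentLedger:
    def __init__(self):
        self._grants = set()  # (data_item, recipient, purpose) tuples

    def opt_in(self, data_item, recipient, purpose):
        """Record an explicit user decision to share for one named purpose."""
        self._grants.add((data_item, recipient, purpose))

    def revoke(self, data_item, recipient, purpose):
        """Consent can be withdrawn at any time."""
        self._grants.discard((data_item, recipient, purpose))

    def may_share(self, data_item, recipient, purpose):
        """Default is private: no grant, no share."""
        return (data_item, recipient, purpose) in self._grants


ledger = ConsentLedger()
assert not ledger.may_share("location", "advertiser", "targeting")  # private by default
ledger.opt_in("location", "venue_app", "event check-in")
assert ledger.may_share("location", "venue_app", "event check-in")
```

The design choice is that the absence of a grant means no sharing; there is no setting to turn privacy on, because privacy is the starting state.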


It needs to be recognized that security is not an absolute. It's an arms race, and every digital company will suffer a security breach if it operates long enough. So it's critical that this objective is seen in the context of the next one, data minimization. Data minimization is a solid underlying principle of security design: keep the value of the PII you hold to a minimum, because it is as much a liability as an asset.

Consider security before writing the first line of code of a system. Adding security later is harder and more expensive, and in some cases impossible without re-architecting the system. So architect for security; don't just code for it.

All security relies on a trust point, even encryption. So remember that every security system is as fallible as the human processes that surround it. Security is thus a process of privileged-access management, in both human and coding terms, and it has to be inculcated into the culture of all aspects of the business.

User Data Minimization

This principle underpins both PbD and SbD. Given that no system can be guaranteed secure, it is incumbent upon a trustworthy entity to minimize the PII it holds, so that when a data breach occurs the consequence to the individual is minimized. Time is the greatest exposure risk for every digital system; even encryption gets progressively easier to circumvent as time passes. In SbD this is a key principle: understanding what data you hold and how it is used is critical to architecting a security system. Less data means less complexity in securing the system, and thus less risk of insecurity.

Don't obtain PII you don't need, don't store what you don't need to, and don't transmit what's not needed. These simple rules maximize the potential to keep PII private and minimize the consequences of a security breach. Properly embraced, they lead to innovations that strengthen the trustworthiness of a company.
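These three rules can be sketched as a whitelist filter applied before anything is stored or transmitted. The service functions and field names below are hypothetical illustrations:

```python
# Hypothetical data-minimization filter: whitelist what each function of the
# service actually needs, and drop everything else before storing or sending.

NEEDED_FOR = {
    "account": {"email"},                # don't obtain what you don't need
    "session": {"auth_token"},           # don't store what you don't need to
    "notification": {"device_push_id"},  # don't transmit what's not needed
}

def minimize(record, function):
    """Keep only the fields the named service function requires."""
    allowed = NEEDED_FOR.get(function, set())
    return {k: v for k, v in record.items() if k in allowed}

raw = {"email": "a@example.com", "birthdate": "1980-01-01", "auth_token": "xyz"}
print(minimize(raw, "account"))  # {'email': 'a@example.com'}
```

The point of the sketch is that minimization is a default-deny decision made per function of the service, not a cleanup job done after collection.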

No Covert Profiling or Tracking

This principle needs to be called out explicitly because covert profiling is the default operation of almost all online services today, from the simple website to the complex app. The ability to 'cleverly' obtain information about users (where they are, who they are, what they are thinking, their state of mind, and so on) without explicitly telling the customer/user that this is happening is central to the engagement culture of developers everywhere. Reversing this trend requires an explicit recognition of it and a top-line commitment not to operate that way, if there is to be any chance of a constructive reversal in the culture of digital engagement models.

Open Business Model

The default business model of the Internet is the fundamental cause of the downward spiral of trust in digital engagement. The pervasive sentiment that the normal rules of society no longer apply once we go online is driving dishonest engagement, reluctant sharing, and active obfuscation. A digital society that drives these norms is not constructive but destructive. It seeks to set new norms, yet these norms are not fostered in constructive debate; they are covertly forced upon us and only validated when time exposes them, all in the name of commercial endeavor that primarily benefits the four horsemen of Google, Facebook, Apple, and Amazon.

The Internet is still in its infancy, and the current business models were needed to enable innovation: no individual consumer was prepared to pay their way directly, so payment has come primarily through profiling people for ever greater insight into their needs and wants, the better to target adverts or services. This model will persist well through the Internet's teenage years and into its young adulthood. However, there is clearly a community of users who desire a more constructive digital engagement model, one in which they are respected and can therefore trust their suppliers. The only way to foster this is to be totally open about the business models of the new breed of businesses. Allow the business model to be debated and validated; only then can trust be fostered.

Most existing online business models can be flipped, offering users/customers new methods to engage in which they are empowered over what is sold to whom for what purpose.

We at Krowdthink are focused on the engagement model, using real-time data to create unidentifiable groups of people who share common targetable advertising assets such as location, time, and contextual interest. We make real-time engagement the revenue asset without exposing any PII or retaining any profiling history: advertisers target the app, not the people, and the app groups people in real-time. Advertising value is determined not by individual profiles or preferences, but by the value of the group, or in our case the crowd. The trick is that we don't need to profile people in order to group them; they group themselves at a point in time, in a place, around a specific interest.

We have the added benefit that value flows first to the place owner, who is co-located with the Krowd and thus has a real-time contextual capability to deliver advertising engagement value. This opens a new engagement opportunity for businesses otherwise forced to engage digitally via the personal-profiling business model that our temporally and locationally displaced Internet demands.

Managing this advertising process so it does not intrude on the connected experience is as much of a challenge for us in the Krowd app as it is for Facebook or Twitter. However, we have a simple ad-block tool: an annual subscription to the app can block ads or let users tailor and select the ads they want to see. These business models will be implemented by Krowdthink once we have a sustainably large customer base. The model does not intrude on privacy at all, nor does it require users to think differently from the revenue models Internet and app users are already familiar with, but its implementation balances their interest in privacy with our business's need to cover costs and make a profit.

A Philosophical View of Digital Society

We often get asked by investors: why don't you take the easy option? You have a solid idea for localised digital engagement; just get the data, sell it, and make us all lots of money.

The answer comes back to this: do we want to contribute to a digital society where this is the modus operandi? Is this the digital society we want to leave behind for our children? The short answer is no.

But we have to be realistic: there are two tsunamis of pressure that make it difficult to stand against this way of commercialising online engagement. The first is that the economic imperative will always pressure digital societal and individual rights; this is as true for the way governments operate as it is for the individual business. The law is usually insufficient to defend against this pressure, because it is retrospective in the face of rapid technological change and, being written by humans, is open to interpretation by those with the funds to pursue it, while many citizens lack the intellectual or financial capacity to exercise their rights. Commercial and economic pressure pushes hard towards favouring the economic opportunity.

The second issue is that commercial law trumps privacy law; not explicitly in statute, but in implementation. Every director of every company has an overriding imperative to return a profit or financial return for its shareholders, and in digital society personal data is the currency traded to deliver on that imperative. The law constrains some uses of this data, but insignificant fines and weaknesses in the enforcement structures, nationally and especially internationally, mean that personal privacy has become a cost of doing digital business.

But consumers are starting to become informed about how they are traded online, and there is a growing resistance to it. It is mostly reflected in the media as security breaches, and big businesses are aware it is becoming a trust issue. A big change is coming: the General Data Protection Regulation seeks to get ahead of the law's traditionally retrospective position in digital society. It contains real consumer powers and massive fines for businesses that do not comply.

The innovation opportunity is not just to embrace the regulation as a positive construct; it is to recognise that the commercial status quo of digital engagement is about to change, and that this change can be accelerated by innovators embracing the underlying principles of privacy en route to constructing a better digital society. In short: compete on the basis of privacy value and trust in digital engagement. We can disrupt if we do this.

The only answer to the two commercial pressures outlined above is to make privacy an asset worth more than the covert trade in our personal digital assets. When that happens, governments and businesses will combine to create a better digital society for our children. This is what we strive for, and it is why I am driving the creation of an event called Privacy: The Competitive Advantage. Hope to see you there.



Privacy Settings are a Privacy Failure

The EU GDPR (General Data Protection Regulation), being written into statute this month, explicitly calls for those storing or operating on personal data to follow the 7 principles of Privacy by Design, the second of which is "Privacy as the Default Setting".

If you follow the simple logic that all operations on your or my personal data are private by default, then there is really no need for privacy settings; none. In fact, the number and complexity of privacy settings correlates directly with the inherent lack of privacy in the platform or product you are using, generally driven by the platform provider's business model of monetising you.

As an app developer that fully embraces these principles, it is notable that our Krowd app has no privacy-settings function at all. By starting with respect for people's data, treating it as if owned by the individual, which means maintaining provenance of all data, meta-data, and derived (analytic) data, every share becomes an explicit opt-in decision by the user, and the app interface makes clear what is being communicated with whom and for what purpose. This is the essence of privacy. Privacy is a function of control over what is shared with whom and why; it is not a lack of sharing.

Maintaining provenance also allows us to follow another GDPR principle: the right to delete. This is something incumbent platform providers will find almost impossible to implement in principle without having tracked provenance.
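A small sketch of why provenance makes the right to delete tractable: if every derived record keeps a link back to its sources, deletion can cascade. The structure below is an illustration of the idea, not the Krowd implementation:

```python
# Hypothetical provenance-tracked store: every derived record records its
# sources, so deleting a user's datum also deletes everything derived from it.

class ProvenanceStore:
    def __init__(self):
        self.records = {}       # record_id -> value
        self.derived_from = {}  # record_id -> set of source record_ids

    def add(self, record_id, value, sources=()):
        """Store a value, noting which existing records it was derived from."""
        self.records[record_id] = value
        self.derived_from[record_id] = set(sources)

    def delete(self, record_id):
        """Right to delete: remove the record and, recursively, its derivatives."""
        dependents = [rid for rid, srcs in self.derived_from.items()
                      if record_id in srcs]
        self.records.pop(record_id, None)
        self.derived_from.pop(record_id, None)
        for rid in dependents:
            self.delete(rid)


store = ProvenanceStore()
store.add("loc1", "cafe visit")
store.add("profile1", "likes cafes", sources=["loc1"])
store.delete("loc1")
assert "profile1" not in store.records  # derived data went with its source
```

Without the `derived_from` links, a provider cannot know which analytic outputs were built from a deleted datum, which is exactly the bind incumbent platforms find themselves in.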

When the business model of a social platform like Facebook is to monetise who you are, it has to start from the basic assumption that everything you share belongs to it, not to you, along with how and when you share (meta-data) and what information can be derived (through analytics) from the aggregate of all this data. Hence Facebook's privacy policy makes it clear that all your data belongs to them. They use privacy settings as a means to tick legal boxes and to hand a limited level of control over some of the data (none of the meta-data or derived data, something we suspect will be challenged in future) back to the individual. On the flip side, this also means there is a practically infinite number of ways they might use your data in a manner you may feel breaches your privacy. Hence their platform, challenged by the latest GDPR legislation, ends up with an increasingly complex set of privacy settings, a list that will only grow over time, eventually (if it has not already) defeating the very objective of empowering users with control over the use of 'their' data through those settings.

Wi-Fi Location Privacy as a Commercial Asset


The Big Data mentality that pervades almost all Internet deployment technologies, services, and apps tends to think of location data as the diamond in the Big Data treasure chest. It is the most insightful of data types. It can be used to indirectly determine interest in the things around people; it can discern who are friends or colleagues even if they never connect digitally; it can spark events such as the making of a specific coffee order for a customer as they enter the café.

When aggregated and correlated with other data, it can ultimately define, in depth, who someone is. Best of all, it requires no user input: it's a passive monitoring facility. So why, despite smartphones having had GPS for over 15 years, are we not seeing widespread use of the data, except in mapping/routing services and of course Uber (who quickly got in trouble for not being careful enough with it)?

Part of the answer is of course legislative, at least in Europe. The mobile operators have been tracking our smartphone locations for years; they even commonly get us consumers to tick a consent box when we get our SIMs, allowing them to use this data for whatever commercial purpose they like. But they are very careful how they use it. They understand, much as Uber did not, that unfettered access to and careless use of such data is a potential nuclear bomb for their brands.

But the real answer is the simple observation that users don't or won't opt in to location tracking unless they see an unambiguous, immediate benefit. In short, it creeps them out. They may not understand how the meta-data of their myriad online interactions profiles them, but there is an instinctive awareness that having their location tracked provides insights they'd rather not share.

It's a trust issue, just as the Uber issue became. In 2013, only 11% of mobile app users said they would be willing to share their location data in a mobile app. That percentage may be even lower today, as users become more informed about how they are being tracked and profiled.

So what's the response of business? In general, to try to collect the data indirectly. For example, 59% of retail fashion stores in the UK now use facial-recognition cameras to track shopper movements. Is this legal? Very doubtful, but currently untested. The main protection is to claim it's done anonymously. That's hard to do with something as specific as a facial image! In fact, it's mathematically arguable that there is no such thing as anonymity in Big Data.


That is why the ICOs around Europe carefully use the 'best efforts' clause to interpret anonymous data control. But what does that mean? Again, untested in law. So, in short, companies play fast and loose with this data and pray they are not the ones to get caught. But a big legislative change is coming.

The root-and-branch revision of the Data Protection Act (the GDPR, General Data Protection Regulation) is due to come into legislation in 2016. Not only is it a tighter definition of privacy, updated to deal with modern technical capability, it raises the bar on fines from a few hundred thousand euros to tens of millions of euros or more. Privacy is no longer something that can be dismissed as a cost of doing business.

So how do we unlock Wi-Fi-based location value? The short answer is that Wi-Fi can co-locate people without any knowledge of location, creating the opportunity to bring the cloud down to the crowd and deliver localised digital engagement in the context of location-private solutions.
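A sketch of the co-location idea: devices seen on the same Wi-Fi access point at the same time form a crowd, and no geographic coordinates are ever recorded. The hashing scheme and identifiers below are a hypothetical illustration, not the Krowd implementation:

```python
# Hypothetical co-location grouping: devices associated with the same access
# point form a crowd. No coordinates are stored; the salted hash of the AP
# identifier is the only "where", and it names a place without locating it.
import hashlib
from collections import defaultdict

SALT = b"rotate-me-daily"  # a rotating salt limits long-term linkability

def crowd_key(ap_bssid: str) -> str:
    """Derive an opaque crowd identifier from a Wi-Fi access point's BSSID."""
    return hashlib.sha256(SALT + ap_bssid.encode()).hexdigest()[:16]

def group_by_ap(sightings):
    """sightings: iterable of (ephemeral_device_id, ap_bssid) pairs."""
    crowds = defaultdict(set)
    for device_id, bssid in sightings:
        crowds[crowd_key(bssid)].add(device_id)
    return crowds

crowds = group_by_ap([("dev-a", "aa:bb:cc"), ("dev-b", "aa:bb:cc"),
                      ("dev-c", "11:22:33")])
assert any(len(members) == 2 for members in crowds.values())  # dev-a and dev-b co-located
```

Because the crowd key is derived from a rotating salt, old keys cannot be linked to places once the salt changes, which keeps the grouping real-time rather than a location history.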

But tech alone does not unlock the location value proposition; what's also needed is an engagement model designed to engender user trust in the service provider. By gaining user trust we can foster localized engagement and, through opt-in mechanisms, unlock localized commercial value.

To that end, Krowdthink has spent years researching and evolving a trust model. We will present and debate our Trust Pyramid when we formally launch. We are already seeing institutions like the Mobile Ecosystem Forum start to define a trust model, and the UK Digital Catapult has a Privacy and Trust initiative that will soon birth its methodology for trusted digital engagement.

Privacy, placed in a trustworthy engagement model, will become the next commercial value proposition for businesses.

Privacy as a Commercial Asset

Let's start with a fundamental statement of belief: the current state of the Internet with regard to the commercialisation of our data and every online interaction, especially including sensor-based data collection as defined mostly by the IoT and smartphones, will not abate until we find a commercial model that makes the delivery of privacy a commercial asset.

Arguably this is precisely what Tim Cook is doing at Apple as he starts to play the privacy card strategically. He points to the age-old truism that without privacy we cannot have freedom. However, Apple makes money by selling consumer goods, so for him privacy is a competitive differentiator against Google, whose whole business model, for almost everything it does, is to collect our data, profile us with ever-increasing accuracy, and sell that data indirectly for ever-increasing software value. Yet Apple still collects as much data as it can, asking for the implicit trust of its users that it will keep that data secure. So are they any better than Google?

As any half-competent technologist will tell you, it is impossible to secure all that data online, especially when the data exists for what is likely to be a whole lifetime. It can, and almost certainly will, leak out. The recent spate of highly publicised hacks like Ashley Madison is just the tip of a huge iceberg of security breaches that the technorati have known about for a long time. Even Apple's iCloud was hacked; whether that was a failing of their systems or user error is hard to confirm, but either way the data was breached.

So building on Apple's strategic play, while mitigating the security concern, becomes something any company could potentially leverage. As consumers come to understand the potential cost of letting their data be obtained and used by any online commercial entity for any purpose, they will want to claw back control, and as they suffer the consequences of data loss, that desire will become a need. But studies show consumers don't change their behaviour much even after a consequence is visited upon them. Why? Because there is no choice, except through a deep understanding of the technology and of personal protection tools, a barrier too high for well over 90% of consumers: yet another disempowerment issue.

Control is the key enabler of privacy, and control based on an understanding of what data is held, by whom, and for what purpose is the essence of a privacy platform. These are empowerment issues. Today, the commercial entity is empowered through our data to be in control; we have to trust them because we have no choice. But what if we could choose to trust an entity because it seeks to empower us as consumers? That is a different thought process.

Offering competitive choice is the essence of Krowdthink's Trust Pyramid. In a future blog we'll discuss how to make revenue from this trust model. We say now: it is not a replacement for existing models. The Internet as it is will exist for years to come, but we can re-invent existing products with a trust-based alternative and monetise them in subtly different ways. We can also tap into market sectors previously inaccessible due to consumer trust and privacy concerns.

Facebook, Social Networks and the Need for RIPA Authorisations

This is an excellent summary of the connection between the Data Protection Act and RIPA (the Regulation of Investigatory Powers Act).
Note that I, and many like me, believe RIPA should be repealed as a piece of legislation that fundamentally undermines our human rights. However, there are at least some positive elements that can be taken from it, as seen here. A lot comes down to interpretation and emphasis, which this article highlights.


Increasingly, local authorities are turning to the online world, especially social media, when conducting investigations. There is some confusion as to whether viewing suspects' Facebook accounts and other social networks requires an authorisation under Part 2 of the Regulation of Investigatory Powers Act 2000 (RIPA). In his latest annual report, the Chief Surveillance Commissioner states (paragraph 5.42):

“Perhaps more than ever, public authorities now make use of the wide availability of details about individuals, groups or locations that are provided on social networking sites and a myriad of other means of open communication between people using the Internet and their mobile communication devices. I repeat my view that just because this material is out in the open, does not render it fair game. The Surveillance Commissioners have provided guidance that certain activities will require authorisation under RIPA or RIP(S)A and this includes repetitive viewing of what are…


The Connection between Trust and Privacy in Social Networking

At Krowdthink we have spent a long time trying to answer this question. The answer is that no one should really trust any online service completely, because no one can guarantee the security of your data. That does not, however, undermine the value of building a company and product that aspires to be trusted. Trust is the missing component in our online engagements today, especially in social networking. Is our social-network persona who we are in real life? Of course not, in the same way that who I present myself as down the pub having a beer differs from who I present myself as at work. But social networks in particular are building profiles of us that go deeper than what we present, capable of determining our psyche over time, really determining who we are, and empowering the commercial entity behind the network with valuable insights to sell on, usually via advertising. And as soon as we click an advert, we have confirmed that we meet that profile. That's scary, especially as we don't know who received that insight about us or how they plan to use it. Let's not forget, either, that those profiling us cannot guarantee the security of the data being held on us. There is good reason why a hacked Facebook account sells for between three and six times more than a hacked bank account.

So what is the role of privacy in achieving a potentially trustworthy (note: not trusted) social network? To answer that, you have to look at how privacy is managed in our daily lives, online or offline. It is ultimately about control: control of what information I share, with whom, when, and where. It is an understanding of whether, and to what extent, those with whom I share information can in turn be trusted. It is also about knowing I can visit recourse (tell them off, dismiss them as a friend, and so on) upon someone who violates the implicit bounds of trust under which the information was shared; when both parties know this, and both face consequence, the motivation is in place to ensure appropriate use of shared information. In social networking terms, the right to delete is that power of recourse: a means of remedy when shared data is used inappropriately by the social-network service provider (data = profit for the social-network business models du jour), or when I wish to change information I posted because it no longer reflects who I am.

But there are other issues, and meta-data is the main one. When engaging online, I leave clues about who I am that have little to do with the content I post: when I connect, with whom, how often, who else is involved in the conversation, and so on. All of this provides insights if it is recorded, insights we are largely unaware are being collected. A trustworthy social network would minimise this information. In fact, data minimisation in general is the only defence against the hacker: store the least data needed to deliver the service to the end user, make other social networks the more interesting targets, and basically keep the security walls high and the value of what's on the other side as low as possible.

In taking our social network into physical locations, we push the boundaries of what people will entrust to the social-network service provider. In places, the digital connection is more real, and because of that more private, than the virtual cloud world most social networks live in. It is thus incumbent on any localised social-networking service provider to balance the equation of trust through greater efforts to be worthy of that trust.

There is more to this trust model, though; see our Trust Pyramid here. For more insight, listen to the Privacy Piracy interview with me on KUCI radio (20th April, 8am PDT (USA), 4pm BST (UK), 5pm CET (Europe)). KUCI will also make the interview available as a podcast after the event.