
Bringing Digital & Place Together – The Importance of Community

Our post-Covid world will be different, but no-one really knows how. Yet there is a general consensus that digital will play an increasingly important role. People will travel less for work and use video conferencing and messaging systems more. Local businesses are increasingly going online, whether for order management, reaching new business or enabling socially-distanced local customer support.

Digital was also being ‘blamed’ for the hollowing out of the retail sector and the destruction of the high street and town centres before Covid came along. It is certainly an agent of change, and Covid is an economic accelerant adding fuel to the destructive fire that digital partially lit.

In responding to Covid, people have reached for the familiar, using their social platforms to replace an ever greater part of their social interactions, and businesses are following, hoping to knit themselves more tightly into those groups. But are these social platforms adding more fuel to the destructive fire? They pull people away from place and endorse a concept of a virtualised world, where people are just digital avatars and consumers of transactional opportunity, and yet…

People make Places

So we have a conundrum – digital connects the people, yet pulls them away from place – the internet business model specifically turns people into global consumers able to access anything anywhere.

So is digital part of the retail and town centre problem? Obviously yes, but could it be part of the solution too? To answer that question we have to look at what digital, and specifically the existing social platforms, is replacing in the real world.

We believe the answer is community. So what is community? And what do we risk losing if we rush to current social platforms to replace it? The Wikipedia definition states: “A community is a social unit (a group of living things) with commonality such as norms, religion, values, customs, or identity. Communities may share a sense of place situated in a given geographical area (e.g. a country, village, town, or neighbourhood) or in virtual space through communication platforms.” It is the idea that a community can be virtual that leads people to assume they can just shift online without consequence. But reading the full definition highlights a critical component: some “communities may share a sense of place”.

When virtual communities cannot ensure visitors/participants are from the place, you immediately lose the concept of a sense of place.

In other words we lose the real world connection. When we meet people in the real world we know they are at least interested enough to visit, whether for tourist attractions, commercial opportunity or friend and family connections, or because they live there. While existing social platforms are good at getting people to a place (used as marketing channels), we should question whether these platforms are appropriate for sustaining the connection to place and community. So a post-Covid world that needs more digital capability to assist in recovery must consider how place and community can be sustained, or even enhanced, digitally. To do this we must look at the values community supports and seek to support them explicitly in the digital realm.

Social Norms

Again, Wikipedia summarises things precisely: “Social norms are regarded as collective representations of acceptable group conduct as well as individual perceptions of particular group conduct. They can be viewed as cultural products (including values, customs, and traditions) which represent individuals’ basic knowledge of what others do and think that they should do. From a sociological perspective, social norms are informal understandings that govern the behaviour of members of a society.”

This definition neatly skewers why platforms like Twitter and Facebook fail to represent place in community. Both actively seek to moderate content to ‘social norms’, but these are effectively dictated by law and by large-scale issues largely of their own making, such as fake news, trolling and hate speech. They are incapable of representing the nuanced values of a place or community because they are not of the place or community. Nor is there any way to ensure participants are of the place or community, other than tracking location and residence – something people generally understand is an insightful piece of data best kept to oneself.

Without the context of place, the social norms that underpin community cannot thrive

This implies that an appropriate degree of oversight or orchestration of the digital community needs to come from within the community, otherwise instinctive social norms cannot be represented or upheld. Localisation of digital community management is critical to sustaining social norms. Such management usually manifests as services the community feels are needed or appropriate, and these tend to concern real world effects, not digital ones: how safe do I feel? How clean is the space? How vibrant and engaging is it? Am I cared for? The social norms for a shopping mall or zoo should probably be managed by the mall or zoo operators, and the social norms for a high street should probably reside with a BID (Business Improvement District) or a local council, yet the acceptable norms for a café should be definable by the café owner, should they wish to assert such norms for the digital space that represents the community (customers) of their physical place.

We need to shift empowerment over the digital community platform to the local level

Yet at the same time, we need to enable a hierarchy of communities that make up wider communities, with the café being part of a mall or high street community, which may in turn be part of a wider town or city community. By enabling such a hierarchy, we also introduce another critically important element largely missing from social platforms like Facebook and Twitter: competition. The better a community orchestrates and supports its norms, the more likely its people are to participate. If the mall does a poor job of this but the high street community does it well, then the people of that place will engage more with the high street community. Digital can and should be part of making a place feel vibrant, well serviced and engaging, but it needs to integrate the context of place and empower localised oversight.

Identity and Community

One of the big management issues for global social platforms is identity. Most insist on some level of identity, with Facebook making it clear your full identity is required. But what identity is required for community? We would suggest the only requirements should be that you are uniquely represented (i.e. your digital name is unique to you as a user) and that you are ‘of the place’.

By being of the place you have authority to speak in the place.

In this respect a digital community platform is more like Twitter, which permits pseudonyms, than Facebook.

Of course, platforms like Facebook and Twitter closely monitor all your posts to collate more information on who you are and how you think, including who you connect with. A community platform does not need to do this, nor should it. It does not need to know more than you explicitly choose to share for the purpose of defining who you are in a community and what your role is. Ad targeting based on your interests is not needed: your choice of communities to connect and engage with is all the ‘profiling’ required. If such marketing support is sought, it can target the community, not the individual. However, we should allow people to extend their visible profile explicitly, so that a more granular understanding of who they are can be projected into a community and discovered – but that profile is yours to manage, control and, if needed, delete at any time. Because who you are is linked to place, your ability to ‘pretend’ an identity is limited: the digital and real worlds are now integral and more readily verifiable through subsequent real-world contact. This can enhance trust in the digital context and, as it happens, undermines the trolls who thrive on the idea that they are anonymous and, even if identified, outside your physical or jurisdictional reach.
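The identity requirements above – a unique pseudonym and membership of the place, and nothing more – can be sketched in a few lines. This is an illustrative assumption, not the Krowd implementation:

```python
# Hypothetical sketch of the minimal identity model described above: the
# platform records only a unique pseudonym per community, plus membership
# of that place-based community. No real name, location history or
# interest profile is held.
class Community:
    def __init__(self, place: str):
        self.place = place
        self.members = set()  # pseudonyms only, nothing else

    def join(self, pseudonym: str) -> bool:
        """Admit a pseudonym only if it is unique within this community."""
        if pseudonym in self.members:
            return False  # name already taken: uniqueness is the one hard rule
        self.members.add(pseudonym)
        return True

cafe = Community("High Street Cafe")
assert cafe.join("blue_heron")
assert not cafe.join("blue_heron")  # the same name cannot represent two users
```

The point of the sketch is what is absent: there is no field for real name, location or interests, so there is nothing to profile or leak.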

Your identity is fungible yet more trustworthy in the context of your community

When place becomes integral to individual digital identity, it becomes a constructive element of community. This is the opposite of the proven and sustained problem with current social platforms, which were born in a PC era that pre-dated the mobile phone and actively disconnect people from real-world community. Because anyone can join and contribute to a Facebook group or a place-based Twitter hashtag thread, community is undermined. That’s not to say such tools don’t have value; they do, as promotional tools to bring people into the physical place where localised RoI can be driven. But sustaining engaged value between the people of the place needs a different digital platform: one that treats the digital infrastructure around place more as an intranet, available only to the people who are there. Once you have such digital context, you can invest with confidence in place-based community assets and services that can be reflected in the digital context.

The IoT and Community

The IoT is a network of sensors and actuators responding to highly localised activities. While it sometimes makes sense for that information to flow to the internet cloud for aggregate comparison or control algorithm development, we would argue that the primary use case is localised. Hence the information flows should be constrained locally, to minimise the risk that the information or the systems themselves can be bent to a purpose outside the intent or benefit of the community.
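One way to picture such a constraint is a gateway that only accepts sensor messages originating on the venue’s own network segment. This is a hedged sketch with a hypothetical venue subnet, not a description of any deployed system:

```python
import ipaddress

# Hypothetical illustration: constrain IoT message flow to the venue's own
# network segment, so readings never leave the local context by default.
VENUE_SUBNET = ipaddress.ip_network("192.168.10.0/24")  # assumed venue LAN

def accept_message(sender_ip: str, payload: dict) -> bool:
    """Accept a sensor message only if it originates inside the venue subnet."""
    return ipaddress.ip_address(sender_ip) in VENUE_SUBNET

# A detector on the venue LAN is accepted; an external address is not.
assert accept_message("192.168.10.42", {"sensor": "smoke", "value": 0.1})
assert not accept_message("203.0.113.7", {"sensor": "smoke", "value": 0.1})
```

A real deployment would layer authentication and transport security on top, but even this simple membership test captures the “intranet-style” boundary argued for here.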

Communities need to control their IoT

There is a great risk in the IoT that as communities come to rely more on the technology, that technology can be subverted to objectives it was never deployed for, usually by those outside the community, just as today’s social platforms have been, as the Cambridge Analytica debacle exemplified. Such subversion is largely outside the control of the community, especially if enacted by another nation state. But if an IoT device can only participate in community information flows because it is there, and if control of it is sustained by the community, we can erect intranet-style cyber-barriers that limit the problems that can occur, making these devices more trusted as technology support pillars for the community.

Let’s take the example of high-rise buildings clad in ACM (aluminium composite material) – one has to really feel for the thousands of residents stuck in them, knowing they live in death traps, unable to sell or afford to live elsewhere. Imagine having children in such circumstances. But now imagine a fire system where a fire in one flat is alerted to all residents with detail of exactly where it is, not just a general fire alarm sound: a text/image/map alert with updates every few minutes as each new detector registers the fire’s spread. Then imagine the fire brigade being able to tap into that community upon arrival, or even before, to obtain instant on-the-ground information, to broadcast action plans and their changes, and to be informed of who is hurt and where. This is a true social platform, connected to community need and based on place. The same platform means you now digitally know all your neighbours, you have a communal voice, and you are empowered not just at resident meetings but continuously, to raise, discuss and promote community issues.
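The alert flow described above can be sketched in a few lines. Names and structure are illustrative assumptions, not a real product design:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical sketch of the structured fire alert described above: each
# triggered detector adds location detail, and every resident receives the
# updated picture rather than a single undifferentiated alarm sound.
@dataclass
class FireAlert:
    building: str
    triggered: list = field(default_factory=list)  # (floor, flat, timestamp)

    def detector_fired(self, floor: int, flat: str) -> str:
        """Record a new detector activation and return the updated broadcast."""
        self.triggered.append((floor, flat, datetime.now(timezone.utc)))
        return self.broadcast_text()

    def broadcast_text(self) -> str:
        locs = ", ".join(f"floor {f} flat {a}" for f, a, _ in self.triggered)
        return f"FIRE in {self.building}: detectors triggered at {locs}"

alert = FireAlert("Tower A")
print(alert.detector_fired(4, "4B"))  # first detector
print(alert.detector_fired(5, "5B"))  # spread detected one floor up
```

Each broadcast carries the full history of triggered detectors, which is exactly the “where it is and how it is spreading” detail a general alarm sound cannot convey.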

Digital Community Constructs

So what are the key building blocks of digital community? We believe there are three key elements:

  1. A technology model that integrates place into individual identity, yet does not track or even know location, so people can engage in confidence
  2. A trust model that shifts the locus of control and empowerment into the community and away from the social platform provider
  3. A business model that supports 1 and 2 – or more specifically does not monetise people as product, but instead is funded by those who wish to sustain the local engagement in a social norm context appropriate to their community so they can enhance community objectives, not undermine them

In short, digital community control needs to sit within the community and align with the business and social models of the community. Place operators, whether companies or elected officials, should be empowered to support their communities and to allow the competitive development of social norms in the digital space that is their communities.

Community Based Digital Value Delivery

So why would I, as an individual, connect to this digital community? It’s constrained; it does not connect me to the whole world, just to those in a place. We believe it’s because social norms should dictate how a place is serviced, and because such a platform can be extended to enable service delivery that is contextual to place. That might be a simpler, more accessible connection to security services so I feel safer (for example, asking them to use their CCTV to locate my child who ran off), or an ability to complain about cleanliness and be advised when my issue is addressed. It could be knowing what the venue operations team knows about crowdedness, and how safe an area is to visit right now in a socially distanced context. Or deals in the shopping centre, available due to stock clearance in a specific shop, offered only because I am there and can collect and clear the stock right now. Or maybe it’s about the event going on in the town centre: what’s happening, who’s there, what spin-off local conversations are being had. Such a platform is about the discovery of who’s there, because people make places. It’s about real-time, place-contextual conversations, and we join in because we are of the community and want simplified access to the services the community has to offer – services that make this place more interesting, engaging and supportive than other places I go to.

To be paid for your Data Use?

As people start to see the damage that the prevalent internet behemoth business models are doing to society, they seek alternative approaches. One that keeps rearing its head is to be paid for your data and its use. We object to that idea on many levels, while agreeing with the basic premise that your data should be treated “as if” it belonged to you. In other words, your rights over your digital self should be respected – rights enshrined in human rights legislation. This is definitely a European vs US ‘legal’ style of approach, but being a European who worked in California in the late 90s and early 2000s, I can say with surety that I feel far more in control of my digital self in the EU than I ever did or currently would in the US. That’s not to say I feel in control, far from it. But at least I have some tools to fight back with.

However, let’s look at the basic idea of being paid for data:

This article https://www.theverge.com/2020/6/22/21298919/andrew-yang-big-tech-data-dividend-project-facebook-google-ubi summarises the latest thinking. It is basically flawed, because it will exacerbate the digital divide and further centralise power in those that can ‘afford’ to pay you. Those who can afford not to be paid will achieve superior privacy rights; those who cannot will have to subject themselves to ever increasing ‘data use’. Society will be commercialised even more than it currently is.

What’s needed is:

1. A focus on minimising the data needed to deliver a service (including limiting retention and transmission) – this should be a point of innovation where possible.

2. Platform approaches which disseminate data-use power closer to the edge, so the context of use can be more readily understood and policed.

3. Closer to the edge means ensuring and supporting greater national, or even more local (state), regulatory oversight, so jurisdictions can compete to offer users a trusted digital space.

4. Competition for ‘trustworthiness’ between data users, managers and orchestrators, to create an upward spiral instead of the current eternal downward spiral of trust in digital services.

Data minimisation is critical because, despite what many say, most people really don’t want to manage their data. One of the arguments for being paid is to make them care – a flawed argument, as described above. But if they don’t care at all, policing is left entirely to legislators, and that has been well proven to be a slow, reactive approach in which most nation states are massively outgunned by the tech behemoths that have evolved. The middle ground is to strongly police data minimisation, extol the virtues of platforms that practise it, and so enable explicitly consented mechanisms for data access by service providers, along with clearly and simply communicated ‘purpose’. If data is minimised, taking people through these consent processes is no longer onerous and, more importantly, they become better informed – which directly equates to greater empowerment of the individual.
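As a concrete illustration of purpose-bound data minimisation, consider the following sketch. The purposes and field names are hypothetical, not from any real platform: a service declares the purpose it needs data for, and anything beyond the explicitly consented minimum is stripped before it is shared.

```python
# Hypothetical sketch of data minimisation with explicit, purpose-bound
# consent: the user has agreed to share only certain fields for certain
# purposes, and everything else is stripped before it leaves the device.
CONSENTED = {
    # purpose -> fields the user has explicitly agreed to share
    "event_networking": {"display_name"},
    "safety_alerts": {"display_name", "venue_zone"},
}

def minimise(profile: dict, purpose: str) -> dict:
    """Return only the fields consented to for this stated purpose."""
    allowed = CONSENTED.get(purpose, set())
    return {k: v for k, v in profile.items() if k in allowed}

profile = {"display_name": "kay", "email": "kay@example.com", "venue_zone": "B2"}
assert minimise(profile, "event_networking") == {"display_name": "kay"}
assert "email" not in minimise(profile, "safety_alerts")
```

Because the allowed set defaults to empty, an undeclared purpose yields nothing at all, which is the “minimise by default” posture argued for above.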

The other three points set an agenda for a new model of commercialisation: they re-instate regulatory power closer to the edge and seek to create competition between states and services to empower individuals. Therein lies the central problem of the Internet today – disempowerment of the individual leads to distrust in digital services, and eventually to the digital apathy we have today that allows the GAFA tech orgs to continue to exist; people just don’t think they can re-assert control, so they give up. Unless we tie a ‘replacement’ business model to re-empowerment of the individual, we cannot address this central problem, and we cannot re-enable community and cultural norms in the context of place through digital – which, at its heart, is what the GAFAs have effectively killed off, and what drives extremist digital activity. This business model has to be centred on digital engagement services, not people’s data.

Event App Security – Is it taken seriously?

The recent Tory Party Conference app privacy and security breach raises an important question; do apps undergo security testing?

The answer, it seems, is: very few.

The immediate consequence falls on the end user. In this case several politicians have apparently had to change their telephone numbers – an issue that for any of us is a major disruption for friends, family and work, but for a high-profile politician raises wider issues of personal safety and security.

In April 2018 Security Intelligence scanned a generic set of mobile apps (not just event apps) and found 85% had exploitable vulnerabilities. What makes it likely that event apps exceed this percentage is that the vast majority are ‘one-offs’, built specifically for one event. They are built on a tight time and cost budget, allowing for neither independent security analysis nor what is generally referred to as pen (penetration) testing. In the case of the Tory Party Conference app, what is clear is that there was not even time for proper app testing, as the vulnerability exposed in the CrowdComms-developed app needed zero hacking capability. Any end user could exploit it and gain full control of another user’s account just by knowing their email address.
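We do not know the conference app’s actual code, but the reported flaw amounts to authorising account access by a client-supplied identifier. The hypothetical handlers below contrast that pattern with deriving identity from a server-verified session:

```python
# Illustrative only: these handlers are hypothetical, not the actual
# conference app's code. They show why trusting a client-supplied email
# as proof of identity gives any user full access to any account.

ACCOUNTS = {"mp@example.com": {"phone": "+44 7700 900000"}}
SESSIONS = {"token123": "mp@example.com"}  # server-issued session tokens

def get_account_broken(requested_email: str) -> dict:
    """Flawed: trusts whatever email the client sends."""
    return ACCOUNTS[requested_email]

def get_account_fixed(session_token: str) -> dict:
    """Safer: derive identity from a server-verified session, never from input."""
    email = SESSIONS.get(session_token)
    if email is None:
        raise PermissionError("not authenticated")
    return ACCOUNTS[email]

# Anyone knowing the email gets the record under the broken scheme...
assert get_account_broken("mp@example.com")["phone"].startswith("+44")
# ...but the fixed scheme requires a valid server-issued session token.
assert get_account_fixed("token123") == ACCOUNTS["mp@example.com"]
```

This class of bug (broken access control) is exactly what a basic pen test is designed to catch before release.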

Earlier this year we demonstrated our Krowd app, and its under-development crowd safety and security features, to West Midlands Police. They were interested and asked if we’d be ready for the Tory Party Conference. We said no, simply because we knew our planned security testing might not be complete in time for that event. Perhaps we were prescient?

Having undertaken an independent pen test, we can attest to its value. We are a company that works hard to protect people’s privacy (world’s first GDPR-compliant event app) and to sustain the confidentiality of our customers’ information; as such we treat security as the foundation for privacy in a digital age. Despite this, we were alerted to several security weaknesses to fix, and advised of a number of techniques to further enhance the security of the system. As we are also undertaking product research and extensions for the UK Office of Counter-Terrorism, we are reviewed by their programme agency, the CTS Division of DSTL in the Ministry of Defence, who reviewed the pen test results and ensured certain additional tests were undertaken to address counter-terror concerns. Overall this effort took a month of elapsed time and over three person-months of engineering and consultant effort, a timescale which is not visibly costed into any event app price model that we have yet found.

Because we have innovated one app that works at any event without modification, the value of these pen tests accrues to every event our Krowd app is applied to. The point being that, for the majority of the event app market, the approach to app development does not allow surety or confidence that your customer data and their personal privacy won’t be easily breached.

More broadly, we are concerned that the general app industry is a long way behind in understanding the importance of security as a foundational aspect of building any software – especially software that takes data such as location, telephone numbers or other sensitive identifying information. We should take the Tory Party Conference app as a reminder to us all: ask your app supplier, “have you done a pen test?” It’s the baseline for trust in digital systems.

John Trickett, the Labour Shadow Cabinet Office minister, said “How can we trust this Tory government with our country’s security when they can’t even build a conference app that keeps the data of their members, MPs and others attending safe and secure?”, to which we say “so tell me your conference app was independently pen tested!”

 

Krowdthink receives Innovation Award for crowd resilience, safety and security

Krowdthink innovation Award Baton

Krowdthink Ltd, developers of the Krowd® event and venue app for security and fan engagement, announces that it has been awarded best innovation in crowd resilience by Major Events International (MEI).

 

In front of an MEI audience of world cup organisers for Cricket, Rugby League and Netball, plus major International Games including Invictus, Commonwealth (Birmingham), Lima, Paris, Tokyo and Australia Gold Coast, Krowdthink’s Managing Director and co-founder was passed his award relay baton, symbolic of teamwork and trust.
Krowdthink’s approach is to enhance its venue-based communication and fan engagement platform, the Krowd®, to integrate venue security staff with the crowd they are charged with keeping safe. Krowdthink’s Krowd® innovatively combines a secure and private-by-design engagement app with features to enhance safety and security operations within high foot-fall spaces. It enables threat, event, venue and safety information to be shared via mobile devices, transforming ‘the crowd’ into a ‘virtual sensor’ to effectively identify threats and easily respond via alerts.

 

Following the Manchester Arena bombing and subsequent terror attacks in London in 2017, the UK Government Office of Security and Counter Terrorism (OSCT) within the Home Office sought to encourage innovation to improve crowd resilience. Key areas of interest in the call included improving the detection of threats from explosives and weapons within a range of high foot-fall crowded places, so reducing the chance of such attacks happening again. The UK Defence and Security Accelerator (DASA) was tasked with engaging the UK technology community, resulting in funding being awarded to Krowdthink to further develop its app, which aims to make the crowd a participatory threat sensor and responder.
“We are deeply honoured to have been selected for this award,” said Geoff Revill, Managing Director and co-founder of Krowdthink Ltd. “It is further validation that our innovation, extending fan engagement with the opportunity to participate in keeping the crowd safe, has huge potential at the world’s largest sports events.”
Dennis Mills, CEO of Major Events International adds “We selected Krowdthink for this award because of their contribution to one aspect of all major events – safety of those in crowded places. They have come up with an innovative solution which can help reduce concerns about suspicious circumstances and allow a rapid response to emerging risks. The award recognises the depth of their understanding of this vital area and we were delighted to acknowledge their achievement and help raise their profile to help them develop their business.”

About Krowdthink Ltd 

Krowdthink is innovating to disrupt the events sector by building one app that any visitor to any venue or event can use without pre-registration or configuration, yet which is orchestrated and/or provided with security support by the event or security team in the venue. We use existing communication infrastructure to build public intranets that enable digital engagement in the context of the venue or event. Our world-leading GDPR-compliant implementation of event tech means our solution respects the privacy of the individual, and the confidentiality of our business customers too, as we neither know nor collect any information about venue or event users. Krowdthink’s mission is to become the world’s most trustworthy fan engagement and event networking platform.

For more information, visit www.krowdthink.com or www.thekrowdapp.com. The company logo and Krowd event app logo can be downloaded from http://krowdthink.com/terms.php

 

Media Contact:

Email: krowd@krowdthink.com

Phone: +44 117 230 2344

About the Krowdthink crowd resilience research and development team 

Krowdthink’s DASA (Defence and Security Accelerator) and OSCT (Office of Counter-Terrorism) research and development project is ably facilitated by Bar Associates and supported by Wright NoW Solutions, Xebre, Telaugos Solutions Ltd and Right Objective.

Our Concerns on Blockchain – Please prove us wrong

Before reading on, view this excellent TEDx talk on the value and benefit of blockchain: https://youtu.be/RplnSVTzvnU

Note the speaker’s view on uncertainty addressed through:

1. Transparency of who you are dealing with, a user-controlled personal identity tool, and visibility of the transaction chain
2. Control of what aspect of identity you share to enable a transaction
3. Remedy/accountability – she related this back to transparency and control (a bit weak, as it assumes legislation and actionable response capacity on the part of the individual), but the right idea

These three elements are the building blocks of trust.
She said “What keeps the blockchain verified is our mutual distrust” – “converting uncertainties into certainties”
All good… now the downside. Remember – a blockchain is a PUBLIC distributed database:
a. Blockchain is in a massive hype cycle, based mostly on most people’s lack of understanding of the underlying tech.
b. The tech relies on encryption as the TRUST point – if we 100% trust the encryption we can 100% trust the blockchain. I guarantee time undermines every encryption-based trust point, because time allows compute capacity to exceed encryption resistance. This is not an issue if the blockchain’s encryption can be revised or updated, but I have looked, and no-one has yet shown me how this can be done. Architecturally blockchain has not facilitated this, which means that from a security PoV it is fundamentally flawed.
c. All blockchains use hashing, a one-way technique that is extremely hard to reverse. So time is on blockchain’s side; it won’t get hacked any time soon. But therein lies its medium to long term issue if point (b) is not addressed, as humans tend to forget that the clock is ticking.
d. A dedicated quantum computer (already at 51 qubits) could conceivably break such a hash – and why wouldn’t a government body dedicate one to the task, if economies become based on it?
e. Unfortunately, some blockchain systems use standard encryption as part of their infrastructure, in which case time is shorter before those blockchains become untrustworthy for some aspect of their service.
f. The management of a blockchain always needs a security infrastructure, and every one we have seen so far is effectively a centralised solution, undermining the distributed security value of blockchain. Where blockchains have been hacked to date, this is where the hacks happened, because such solutions are always hackable – refer back to (a) and the hype cycle.
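Point (c) can be made concrete in a few lines of code. Below is a minimal hash-chain sketch (illustrative only, not any production blockchain): each block commits to its predecessor’s SHA-256 hash, so rewriting history invalidates every later link. The hash function itself is the fixed trust point our concern in (b) is about.

```python
import hashlib
import json

# Minimal hash-chain sketch: each block commits to its predecessor's hash,
# so altering any historical block breaks verification of the whole chain.
def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev": prev_hash}

def chain_valid(chain: list) -> bool:
    """Every block's 'prev' field must equal the hash of the block before it."""
    return all(chain[i + 1]["prev"] == block_hash(chain[i])
               for i in range(len(chain) - 1))

genesis = make_block("genesis", "0" * 64)
second = make_block("payment: A->B 5", block_hash(genesis))

assert chain_valid([genesis, second])
genesis["data"] = "tampered"               # rewrite history...
assert not chain_valid([genesis, second])  # ...and the chain no longer verifies
```

Note that the guarantee rests entirely on SHA-256 remaining hard to reverse or collide: the code has no mechanism for swapping in a stronger hash later, which is precisely the un-updateability concern above.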
Imagine a world built on blockchain for all transactions and then encryption gets broken – not a world we want to be in when those consequences fall.
That said, for digital business to flourish we need to find ways to trust the security infrastructure, and blockchain is the best we have seen so far. But we really want to see the fundamental flaw of un-updateable blockchain hashes fixed before betting our customers’ trust on it, while this hype cycle (which creates undue expectations on customers’ behalf) is in progress.
We could be wrong, and we hope we are – but we have spoken with six different “experts” on blockchain and not one has yet demonstrated to us that our concern is unfounded.

A Business Philosophy for a Better Internet

When Digital meets Real - Trust is more important

In a world where digital and real life are ever more closely entwined, what is a better Internet?

Simply put, it’s a place I can go and trust that I can lead a digital life that is as free as my real life. However our digital freedoms, and consequently our real world freedoms, are undermined by one simple thing: the commercial reality of funding the Internet.

“We need a new business model for the Internet.”

If we take a small step back in time, a mere 30 years, to the birth of the Internet, visionaries in Silicon Valley and elsewhere saw its awesome potential. But the sheer scale of the opportunity had to be funded somehow. It was recognised that data was the key, and that personal data in particular could be a new tradable asset. But who would buy it? The simple answer: advertisers. They spent $0.5 trillion in 2016, and this is expected to reach $0.73 trillion by 2020. Of course this is just the tip of the personal data value iceberg, with Apple and Amazon combining this data value into shopping and device services of equivalent revenue scale. Apple, Amazon, Google and Facebook’s combined 2016 revenue is greater than the GDP (Gross Domestic Product) of the vast majority of countries – more than 176 of the world’s 196. These digital entities dominate in their various spaces and operate near-monopolies over our personal data, and through that over our digital lives and our forming digital society. I would argue this advertising (and thus personal profiling) based business model was a necessity 30 years ago, even 20 years ago, because no-one was personally willing to pay directly for services whose value they could not yet comprehend. Yet now we cannot conceive of living our lives without them. Try committing to never again use the products of just two of these four companies – it’s virtually impossible for the average person on the street to imagine.

The issue is that these services, and thus our personal data, now power entire economies. Governments are unwilling or unable to challenge these dominants because they would have to undermine digital services to the voting citizen. In short they are stuck, and have to engage in the process of harvesting our personal data, or at least participate indirectly by funding advertising through these entities, in order to stay in or gain power. The recent debates on the Brexit and Trump campaigns show how paid-for digital influence is suborning democracy, raising serious cause for concern for our freedoms as citizens.

The trust-undermining issue with advertising as an Internet funding model is that it's highly competitive. The only way for one digital agency to succeed over another is to provide better targeting – which in turn requires ever more detailed profiles of who we are, where we are and when, and, better still, predictive analytics of what we'll do next, all obtained using ever more deceitful techniques. Facebook now profiles its users in over 52,000 different ways. The advertising model can only grow by delving ever deeper into our personal data. The digital dominants sustain their position by knowing more about you than anyone else in their sphere of business: your social life (Facebook), your purchasing power and habits (Amazon), your interests (Google search), your physical movements (Google Android and Apple iOS) and a lot more.

“Advertising drives a commercial and economic need to digitally model every aspect of the real world and our real lives.”

The initial advertising funding model was developed to solve a problem, and that problem has now been solved: everyone 'gets' the value of the Internet. So why do we persist with the same old commercial model when the harms to our freedoms are becoming so apparent? All this data stored in the cloud is fundamentally insecure; no one can assure you or me that it can be protected for the rest of our lives, yet this is what Apple, Facebook, Amazon and Google would have you believe, despite daily evidence to the contrary. Even if it were 100% secure, you and I as citizens have no idea who holds it or for what purpose it's being used. That fundamentally undermines our privacy, and through that our very trust in Internet service provision – trust which is demonstrably declining every year.

So what has changed that could allow a new business model to develop? Perversely, it's the very success of the Internet. Digital is everywhere; in some countries more people have a smartphone than have a toilet or access to running water. Digital has become integral to our daily lives – almost a necessity, certainly for an economically progressive society able to compete in the global market.

In some parts of the world (the EU especially, taking a global lead) digital businesses are facing a tsunami of legislation. We at Krowdthink are big fans of the GDPR and have great hopes for the E-Privacy Regulation in development, as well as standards like eIDAS and other regulations. Not because regulation is good in itself, but because they try to inculcate trust and freedoms into digital society. Legislation has become necessary to curb the slide towards distrust in digital services, but it is a stick, not a carrot. Business won't really change unless it sees commercial value and opportunity in changing – plenty of businesses have persisted with the same old models even in the face of certain demise. In short, the Internet itself is ripe for disruption – and that disruption won't come from technology, it'll come from a change in business model and digital market organization.

I have worked in real-time control systems for over 25 years, and what we at Krowdthink see is that the Internet, and indeed the burgeoning Internet of Things, has moved from a transactional IT-style infrastructure towards a real-time system of systems. Data is flowing continuously everywhere – personal data is often referred to as our digital exhaust, as we leave trails everywhere we go and in everything we do. But an exhaust is not a left-over to be stored and processed; it's live information that can facilitate business opportunity. Through research and real-world application we now have a fundamental understanding of how people interact – the Internet has given us that. Carefully executed research can give us the analytical tools to use data in real time, without the advertiser's need to subsequently track and profile all we do.

“What has been missed is the opportunity to build a new business model for the Internet based on real-time information flow. We don’t need to profile, track and model every single person on the Internet in order to deliver them digital value.”

Collating historic profiling data puts people at risk from the criminal element, whose strength is increasingly evident against the weakness of the Internet. The criminals' goal is access to personal data for subsequent fraud, or denial of access to data for service denial. Our data is both the strength and the weakness of the Internet. What if we did not collect the data? What if we minimized what was stored to almost insignificant levels? What if we built transactional value on real-time data – information volunteered and made available at a point in time in order to gain access to digital value? What if we re-invented existing services using this model? We at Krowdthink believe strongly that many (not all) Internet services can be re-invented this way – and by doing so, by making the data attack surface smaller, we will end up with a better, more trustworthy Internet; an Internet where we'll all engage more confidently and actually share more.

“Let’s monetise the engagement opportunity – not people’s personal data”

Redefining Trust in Digital Engagement

Krowdthink has spent a long time researching and thinking about how to build a business that is fundamentally trustworthy when delivering a digital service. This blog summarizes our implementation. Its simplest expression is our Trust Pyramid. Its articulation has been described by some as an ethical business model; we make no such claim, but appreciate the perspective.


Trustworthy

Too many organizations think of trust as a binary condition – some customers trust the entity, some do not – and the aim is simply to get a higher percentage to state their trust. But such goal-setting subverts the whole objective of trust. It becomes a game, and often, especially online, it's less about being worthy of trust and more about gaining trust, frequently using methods that are not worthy of it – a tactic that time will always reveal. What's needed is a strategy that gets stronger as time reveals the company's true activities. Hence fostering a company culture that seeks the unattainable goal of being trustworthy is the only sustainable approach to addressing the online trust deficit built up by 15 years of business models that gamify the customer engagement process.

There are two primary perspectives of trust that differ but share the commonality of customer/user confidence:

Social

An attitude of confident expectation that one’s vulnerabilities will not be exploited

Business

Confidence that the value exchange is fair and equitable and that loss of trust drives an equivalent/proportional consequence on both parties

The issue of consequence will be addressed later, but is a critical component for the development of trustworthy commercial engagement.

Empowering the User/Customer

Trust is obtained through a commitment to empower the customer/user in an open, balanced, mutually beneficial engagement process. There are three pillars of empowerment that a business must embrace to foster customer/user trust. In the digital context, a useful thought process is to treat all personally identifiable data – whether directly obtained (name, address, etc.), indirectly obtained meta-data (how a customer engages, when, where, etc.) or derived (created through algorithms or correlation with other data sets) – as conceptually belonging to the customer/user. The business is merely its temporary custodian. In the EU the GDPR will legally enforce this perspective.

Transparency

The business must be utterly transparent with its customer/user about what Personally Identifiable Information (PII) it obtains and how that data is used. It should be open about how PII is secured, both at rest and in transit.

We suggest publishing all data types relating to PII, with simple descriptions of their use and how the data is secured.
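Such a published register can be as simple as a structured document. Here is a minimal Python sketch of the idea; the field names and entries are illustrative assumptions, not Krowdthink's actual schema:

```python
import json

# Hypothetical PII register: each entry names a data type the service
# collects, why it is used, and how it is secured. Entirely illustrative.
PII_REGISTER = [
    {"data": "email address", "category": "direct",
     "purpose": "account login and password recovery",
     "secured": "encrypted at rest, TLS in transit"},
    {"data": "session timestamps", "category": "meta-data",
     "purpose": "abuse detection; deleted after 30 days",
     "secured": "stored without user identifier"},
]

def publish_register(register):
    """Render the register as JSON so it can be published verbatim."""
    return json.dumps(register, indent=2)

print(publish_register(PII_REGISTER))
```

Publishing the register in a machine-readable form also gives journalists and assessment entities something concrete to audit against.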

The GDPR is driving the evolution of new consent models with consent receipts that could eventually replace this publication process.

It is recognized that the majority of users will have limited ability to assess the veracity of this information, but if all businesses followed this approach, journalists would get savvy and assessment methods and entities would evolve to determine whether the data processing meets the standards set out in the rest of this model.

Control

The company should maximize the ability of the user/customer to control what PII is shared with whom, whilst ensuring clarity about the limited purposes for which such PII is shared. This is both a UI issue and a UX issue – UX more so, because the customer/user's assumptions about what is happening should always hold true. Control also means the ability to withhold PII whilst still participating constructively in the digital engagement process. Of course, many online services will not function without some level of PII sharing, but innovative thinking about minimizing what's needed to deliver a capability makes it easier to communicate simply what control the user has over the limited PII that does drive the engagement.

Remedy

Many trust papers talk about accountability as the third pillar of empowerment for trust. However, this often defaults to accountability to the law. The law is always the lowest possible trust bar; it's a codification of privacy and security principles that sets the minimum standard of operation. Worse, it is shaped mostly by business and its pro-commercial lobbyists; worse still, the individual rarely has the financial, technical or legal capacity to pursue their rights under it. Most privacy lawyers have become risk mitigators for their commercial clients.

Truly trustworthy companies seek to empower their customers/users with the ability to take their business to a competitor when/if their trust is lost in their current service provider. This is an extremely scary statement for most commercial entities who primarily seek customer lock-in, whilst not realizing that trust is the best lock-in of all. In fact the act of coerced lock-in is a degradation of trust and is potentially unsustainable for the business.

To get trust one must first give it. Trust your business to do right by its customers, and trust your customers to do right by you in return. When you screw up, that trust will be rewarded with the flexibility and time to address the screw-up.

When trust is lost and one party feels it has suffered a significantly greater loss than the other, recovery of trust becomes even harder.

The GDPR creates fundamental new rights for customers both to delete their PII and to move it to a competitor. Far better to embrace these legislative requirements as trust assets than to fight the law. Giving users a simple, unobstructed ability to delete and move their data is empowering: customers then feel they have, individually and en masse, the ability to inflict consequence on a company and thereby drive a change of operation in their favour. In this hyper-connected world they can do so anyway – you may as well embrace it as a business asset.

Development Principles

To deliver against the trustworthy aspiration, one has to foster a company culture that consistently strengthens the trust model. It has to cover technology and process, in development and innovation, as well as in the commercial engagement model.

Private-by-Design

The 7 principles of privacy-by-design call out the aspects of the trust model and security model needed, focused on assuring customer privacy in the digital engagement process.

If properly followed, you will end up with apps and services that don't need privacy settings, because each stage of sharing becomes an explicit opt-in. In fact, arguably, a privacy setting is a privacy fail!

The explicit opt-in thought process, or consent model, is central to achieving PbD. PbD is not about not sharing; it's about being in control of what PII is shared with whom and for what purpose, backed by confidence that the service provider protects the user/customer from its own, or any third party's, inadvertent or deliberate use of their PII beyond what they understood they agreed to.
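The opt-in-by-default idea can be sketched in a few lines: nothing is shared unless the user has explicitly granted consent for that recipient and purpose, and absence of a grant means no sharing. This is an illustrative Python sketch under those assumptions, not Krowdthink's actual implementation; all names are hypothetical:

```python
# Privacy as the default setting: sharing requires an explicit,
# per-recipient, per-purpose opt-in that can be revoked at any time.
class ConsentLedger:
    def __init__(self):
        self._grants = set()  # (user, recipient, purpose) tuples

    def opt_in(self, user, recipient, purpose):
        self._grants.add((user, recipient, purpose))

    def revoke(self, user, recipient, purpose):
        self._grants.discard((user, recipient, purpose))

    def may_share(self, user, recipient, purpose):
        # Default is private: no grant, no sharing.
        return (user, recipient, purpose) in self._grants

ledger = ConsentLedger()
assert not ledger.may_share("alice", "cafe_app", "event_offers")
ledger.opt_in("alice", "cafe_app", "event_offers")
assert ledger.may_share("alice", "cafe_app", "event_offers")
```

Because the default answer is "no", there is nothing for a privacy settings page to toggle: consent exists only where the user created it.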

Secure-by-Design

It needs to be recognized that security is not an absolute. It's an arms race, and every digital company will suffer a security breach if it operates long enough. So it's critical that this objective is seen in the context of the next one: data minimization. Data minimization is a solid underlying principle of security design – keep the value of the PII you hold to a minimum, as it's as much a liability as an asset.

Consider security before writing the first line of code of a system. Adding security later is harder and more expensive – in some cases impossible without re-architecting the system. So architect for security, don't just code for it.

All security relies on a trust point, even encryption. So remember that every security system is as fallible as the human processes that surround it. Security is thus a process of privileged access management, in both human and coding terms. Hence it has to be inculcated into the culture of all aspects of the business.

User Data Minimization

This principle underpins both PbD and SbD. Given that no system can be guaranteed secure, it is incumbent upon a trustworthy entity to minimize the PII it holds, so that when a data breach occurs the consequence to the individual is minimized. Time is the greatest exposure tool for every digital system; even encryption gets progressively easier to circumvent as time passes. In SbD it's a key principle: understanding what data you hold and how it is used is critical to architecting a security system. Less data means less complexity in securing it, and thus less risk of insecurity in the system.

Don't obtain PII you don't need, don't store what you don't need to, and don't transmit what's not needed. These simple rules maximize the potential to keep PII private and minimize the consequences of a security breach. Properly embraced, they lead to innovations that strengthen the trustworthiness of a company.
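One concrete way to enforce "don't store or transmit what you don't need" is an explicit allow-list per operation, so that any field not named is dropped by default. A minimal sketch, with hypothetical field names:

```python
# Allow-list of the fields each operation genuinely needs.
# Anything not listed is stripped before the operation runs.
NEEDED = {
    "store":    {"user_id", "display_name"},
    "transmit": {"display_name"},
}

def minimize(record, operation):
    """Keep only the fields the named operation actually needs."""
    allowed = NEEDED[operation]
    return {k: v for k, v in record.items() if k in allowed}

profile = {"user_id": "u42", "display_name": "Sam",
           "email": "sam@example.com", "location": "51.5,-0.1"}
assert minimize(profile, "store") == {"user_id": "u42",
                                      "display_name": "Sam"}
assert minimize(profile, "transmit") == {"display_name": "Sam"}
```

The design choice matters: an allow-list fails safe (new fields stay private until someone argues they are needed), whereas a block-list fails open.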

No Covert Profiling or Tracking

This principle needs to be called out explicitly because covert profiling is the default operation of almost all online services today, from the simple website to the complex app. 'Cleverly' obtaining information about users – where they are, who they are, what they are thinking, their state of mind – without explicitly telling the customer/user that this is happening is central to the engagement culture of developers everywhere. Reversing this trend requires explicit recognition of it, and a top-line commitment not to operate that way, if there is to be any chance of a constructive change in the culture of digital engagement models.

Open Business Model

The default business model of the Internet is the fundamental cause of the downward spiral of trust in digital engagement. The pervasive sentiment that once we go online the normal rules of society no longer apply drives dishonest engagement, reluctant sharing and active obfuscation. A digital society that drives these norms is not constructive; it is destructive. It seeks to set new norms, yet these norms are not fostered in constructive debate – they are covertly forced upon us and only validated when time exposes them, all in the name of commercial endeavour that primarily benefits the four horsemen of Google, Facebook, Apple and Amazon.

The Internet is still in its infancy, and the current business models were needed to enable innovation: no individual consumer was prepared to pay their way directly, so payment has come primarily through profiling – ever greater insight into people's needs and wants to better target adverts or services. This model will continue well through the Internet's teenage years and into its young adulthood. However, there is clearly a community of users who desire a more constructive digital engagement model, one in which they are respected and can thus trust their suppliers. The only way to foster this is to be totally open about the business models of the new breed of businesses. Allow the business model to be debated and validated; only then can trust be fostered.

Most existing online business models can be flipped, offering users/customers new methods to engage in which they are empowered over what is sold to whom for what purpose.

We at Krowdthink are focused on the engagement model, using real-time data to create unidentifiable groups of people who share targetable advertising assets such as location, time and contextual interest. We make real-time engagement the revenue asset, without exposing any PII and without retaining any profiling history: advertisers target the app, not the people, and the app groups people in real time. Advertising value is determined not by an individual's profile or preferences but by their value as a group – in our case, a crowd. The trick is that we don't need to profile people in order to group them; they group themselves, at a point in time, in a place, around a specific interest.

We have the added benefit that value flows to the place owner first: they are co-located with the Krowd and thus have a real-time, contextual ability to deliver advertising engagement value. This opens a new engagement opportunity for businesses otherwise forced to engage digitally via the personal-profiling business model that our temporally and locationally displaced Internet demands.

Managing this advertising process so that it does not intrude on the connected experience is as much of a challenge for us in the Krowd app as it is for Facebook or Twitter. However, we have a simple ad-block tool: an annual subscription to the app can block ads, or allow users to tailor and select the ads they want to see. These business models will be implemented by Krowdthink once we have a sustainably large customer base. The model does not intrude on privacy at all, nor does it require users to think differently from the revenue models Internet and app users are already familiar with, but its implementation balances their interest in privacy with our business's need to cover costs and make a profit.
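The profile-free grouping described above can be illustrated with a short sketch: people are grouped by where they are, when, and a declared interest, and identity is a throwaway token, so no history accumulates across sessions. All names and the 15-minute window are illustrative assumptions, not the Krowd app's actual mechanics:

```python
from collections import defaultdict
import secrets

def crowd_key(place, timestamp, interest, window=900):
    # Bucket time into 15-minute windows so co-present people coincide.
    return (place, timestamp // window, interest)

crowds = defaultdict(set)

def join(place, timestamp, interest):
    # Ephemeral token: not linked to any stored profile or past visit.
    token = secrets.token_hex(8)
    crowds[crowd_key(place, timestamp, interest)].add(token)
    return token

join("stadium_w1", 1000, "football")
join("stadium_w1", 1400, "football")   # same window -> same crowd
join("stadium_w1", 2000, "football")   # next window -> separate crowd
assert len(crowds[("stadium_w1", 1, "football")]) == 2
```

An advertiser can then be sold access to a crowd (place, window, interest) without ever learning who its members are, which is the inversion of the profiling model the text describes.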

A Philosophical View of Digital Society

We often get asked by investors: why don't you take the easy option? You have a solid idea for localised digital engagement – just get the data, sell it, and make us all lots of money.

The answer comes back to this: do we want to contribute to a digital society where this is the modus operandi? Is this the digital society we want to leave behind for our children? The short answer is no.

But we have to be realistic – there are two tsunamis of pressure that make it difficult to stand against this way of commercialising online engagement. The first is that the economic imperative will always press against digital societal and individual rights – as true for the way governments operate as for the individual business. The law is usually insufficient to defend against this pressure: it is retrospective in the face of rapid technological change, it's written by humans and so open to interpretation by those with the funds to do so, and many citizens lack the intellectual or financial capacity to exercise their rights. Commercial and economic pressure pushes hard towards favouring the economic opportunity.

The second issue is that commercial law trumps privacy law – not explicitly in statute, but in implementation. Every director of every company has an overriding imperative to return a profit for its shareholders. In digital society, personal data is the currency traded to deliver on this imperative. The law constrains some uses of this data, but insignificant fines and weak enforcement structures, nationally and especially internationally, mean that personal privacy has become a cost of doing digital business.

But consumers are starting to become informed about how they are traded online, and there is a growing resistance to it. It mostly surfaces in the media as security breaches, and big businesses are aware it is becoming a trust issue. A big change is coming, though: the General Data Protection Regulation seeks to get ahead of the law's traditionally retrospective position in digital society. It contains real consumer powers, and massive fines for those businesses that do not comply.

The innovation opportunity is not just to embrace the regulation as a positive construct; it's to recognise that the commercial status quo of digital engagement is about to change, and that this change can be accelerated by innovators embracing the underlying principles of privacy en route to constructing a better digital society. In short: to compete on the basis of privacy value and trust in digital engagement. We can disrupt if we do this.

The only answer to the two commercial pressures outlined above is to make privacy an asset worth more than the covert trade in our personal digital assets. When that happens, governments and businesses will combine to create a better digital society for our children. This is what we strive for. It's why I am driving the creation of an event called Privacy: The Competitive Advantage. Hope to see you there.

Privacy Settings are a Privacy Failure

The EU GDPR (General Data Protection Regulation), being written into statute this month, explicitly calls for those storing or operating on personal data to follow the 7 principles of Privacy by Design, the second of which is "Privacy as the Default Setting".

If you follow the simple logic that all operations on your or my personal data are private by default, then really there is no need for privacy settings – none. In fact, the number and complexity of privacy settings correlates directly with the inherent lack of privacy in the platform or product you are using, generally driven by the platform provider's business model of monetising you.

As an app developer that fully embraces these principles, it is notable that our Krowd app has no privacy settings function at all. By starting with respect for people's data – treating it as if owned by the individual, which means maintaining provenance of all data, meta-data and derived (analytic) data – every share becomes an explicit opt-in decision by the user, and the app interface makes clear what is being communicated, with whom, for what purpose. This is the essence of privacy. Privacy is a function of control over what is shared with whom and why; it is not a lack of sharing.

Maintaining provenance also allows us to follow another GDPR principle – the right to delete – something incumbent platform providers will find almost impossible to implement without having tracked provenance.
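Why provenance makes the right to delete tractable can be shown in a toy sketch: if every stored item records which user's data it came from, erasure is a simple query; without that link, derived and meta-data cannot be traced back. Purely illustrative, with hypothetical names:

```python
# Provenance-tracked store: each item remembers its owner, so a
# GDPR-style erasure request can cascade to meta-data and derived data.
class ProvenanceStore:
    def __init__(self):
        self._items = {}   # item_id -> (owner, payload)

    def put(self, item_id, owner, payload):
        self._items[item_id] = (owner, payload)

    def delete_all(self, owner):
        """Remove everything whose provenance traces to this owner."""
        doomed = [i for i, (o, _) in self._items.items() if o == owner]
        for i in doomed:
            del self._items[i]
        return len(doomed)

store = ProvenanceStore()
store.put("msg1", "alice", "hello")
store.put("derived1", "alice", "analytic: active in evenings")
store.put("msg2", "bob", "hi")
assert store.delete_all("alice") == 2   # message AND derived data go
assert list(store._items) == ["msg2"]
```

The point of the sketch is the ownership column: a platform that aggregates data without recording provenance has no equivalent of `delete_all` to offer.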

When the business model of a social platform like Facebook is to monetise who you are, it has to start from the basic assumption that everything you share belongs to it, not to you – as does how and when you share it (meta-data) and what information can be derived (through analytics) from the aggregate of all this data. Hence Facebook's privacy policy makes clear that all your data belongs to them. Privacy settings are used to tick legal boxes and to hand back a limited level of control over some of the data (none of the meta-data or derived data – something we suspect will be challenged in future). On the flip side, this also means there is an effectively unlimited number of ways they might use your data in a manner you may feel breaches your privacy. Hence their platform, challenged by the latest GDPR legislation, ends up with an ever more complex set of privacy settings – a list that will only grow over time, eventually (if it has not already) defeating the very objective of empowering users with control over the use of 'their' data through those settings.

Wi-Fi Location Privacy as a Commercial Asset


The Big Data mentality that pervades almost all Internet technologies, services and apps tends to think of location data as the diamond in the Big Data treasure chest. It is the most insightful of data: it can be used to indirectly determine interest in the things around people; it can discern who are friends or colleagues, even if they never connect digitally; it can trigger events, such as preparing a customer's usual coffee order as they enter the café.

When aggregated and correlated with other data, it can ultimately define, in depth, who someone is. Best of all, it requires no user input – it's a passive monitoring facility. So why, despite smartphones having had GPS for over 15 years, are we not seeing widespread use of the data, except in mapping and routing services and of course Uber – who quickly got into trouble for not being careful enough with it?

Part of the answer is of course legislative, at least in Europe. The mobile operators have been tracking our smartphone locations for years; they even routinely get us consumers to tick a consent box when we get our SIMs, allowing them to use this data for whatever commercial purpose they like. But they are very careful how they use it. They understand, as Uber did not, that unfettered access to and careless use of such data is a potential nuclear bomb for their brands.

But the real answer is the simple observation that users don't or won't opt in to location tracking unless they see an unambiguous, immediate benefit. In short, it creeps them out. They may not understand how the meta-data of their myriad online interactions profiles them, but there is an instinctive awareness that having your location tracked provides insights they'd rather not share.

It's a trust issue, just as the Uber issue became. In 2013 only 11% of mobile app users said they'd be willing to share their location data in a mobile app. That percentage may even be slipping lower today as users become more informed about how they are being tracked and profiled.

So what's the response of business? In general, to try to collect the data indirectly. For example, 59% of retail fashion stores in the UK now use facial recognition cameras to track shopper movements. Is this legal? Very doubtful, but currently untested. The main defence is to claim it's done anonymously – hard to sustain for something as specific as a facial image. In fact it's mathematically arguable that there is no such thing as anonymity in Big Data.


It's why the ICOs around Europe carefully use the 'best efforts' clause to interpret anonymous data control. But what does that mean? Again, untested in law. So in short, companies play fast and loose with this data and pray they are not the ones to get caught. But there is a big legislative change coming.

The root-and-branch revision of the Data Protection Act – the GDPR (General Data Protection Regulation) – is due to come into law in 2016. Not only is it a tighter definition of privacy, updated to deal with modern technical capability, it raises the bar on fines from a few hundred thousand Euros to tens of millions of Euros or more. Privacy is no longer something that can be dismissed as a cost of doing business.

So how do we unlock Wi-Fi-based location value? The short answer is that Wi-Fi, without any knowledge of location, can co-locate people – creating the opportunity to bring the cloud down to the crowd, delivering localised digital engagement through location-private solutions.
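The co-location idea can be sketched simply: devices on the same Wi-Fi access point derive a shared group identifier by hashing the access point's identifier, so the service can tell that people are together without ever learning where "together" is. The salting and daily rotation here are illustrative assumptions, not a description of any actual deployment:

```python
import hashlib

def colocation_group(ap_bssid, daily_salt):
    """Derive a group ID from an access point identifier.

    The service never stores the BSSID or any coordinates, only the
    digest; the salt rotates daily so groups cannot be linked over time.
    """
    digest = hashlib.sha256((daily_salt + ap_bssid).encode()).hexdigest()
    return digest[:16]

g1 = colocation_group("aa:bb:cc:dd:ee:ff", "2016-05-01")
g2 = colocation_group("aa:bb:cc:dd:ee:ff", "2016-05-01")
g3 = colocation_group("aa:bb:cc:dd:ee:ff", "2016-05-02")
assert g1 == g2   # same access point, same day: same crowd
assert g1 != g3   # salt rotation unlinks one day from the next
```

Two devices reporting the same digest are demonstrably co-located, yet neither the digest nor anything stored with it reveals a geographic position.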

But tech alone does not unlock the location value proposition – what's also needed is an engagement model designed to engender user trust in the service provider. By gaining user trust we can foster localized engagement and, through opt-in mechanisms, unlock localized commercial value.

To that end, Krowdthink has spent years researching and evolving a trust model. We will present and debate our Trust Pyramid when we formally launch. We are already seeing institutions like the Mobile Ecosystem Forum start to define a trust model, and the UK Digital Catapult has a Privacy and Trust initiative that will soon publish its methodology for trusted digital engagement.

Privacy, placed in a trustworthy engagement model, will become the next commercial value proposition for businesses.