past, present, and future
Lyra Communications Ltd,
Covent Garden, London
A nonprofit organisation
We were born at a very special time: we were part of the last generation in the West ever to grow up without the Internet. We grew up writing letters, notes and postcards. Today, however, almost all of our written communication takes place on the Web.
In this article, we look at the history of online conversation and the problems that affect online discourse today. We then present Lyra, our attempt to do better.
The past: a diverse mix of services
IBM 7094s, which ran the world's first online conversation platform.
Humans are social animals: we love to share, discuss, argue and communicate. This love is the fuel and the fire which underlies some of the most influential companies the world has ever known: the social networks. But online communication started very simply - before the Web, and before the Internet.
TALKOMATIC, running in 1973 at the University of Illinois.
Many technologies old and new have been pressed into the service of communication. Before the discovery of electricity, we used smoke signals, beacons, flashing mirrors, and semaphore. Electrical wiring brought the Morse telegraph key and Telex machine. Computer technology was no different.
The first known system aimed at online communication was created at MIT in 1965 - 52 years ago. MIT used a mainframe system: a big central machine, like the IBM 7094s shown, linked to dumb terminals all over the campus which students and staff could use. Users communicated by leaving files, with names like "TO TOM," in the public folders. This wasn't a system explicitly designed for communication - but people found a way. Then, Tom Van Vleck and Noel Morris wrote two commands: MAIL would write a message to another user's mail box, for later reading, and WRITE caused a one-line message to pop up on another user's screen.
Even this, the very first system designed just for communication, showed the difference between real-time (synchronous) and stored (asynchronous) messages. But there was a long way still to go. By the spring of 1965, Morris and Van Vleck had written a complete mail system for the MIT mainframe; but it did not support messages with subject lines, colour or graphics, or sending to more than one recipient. And it could only send messages to users of the same mainframe on the MIT campus.
ARPANET in 1977
A real email service had to wait for a network. By the early 1970s, the US Department of Defense had established ARPANET, the Internet's forerunner, and took an organised approach to developing an email system. Ray Tomlinson, who worked for an ARPANET contractor, is known as the inventor of modern email: it was he who chose the @ symbol to separate a user's name from the name of their machine. The system he designed is still in use today, with plenty of extensions (fonts, colours, images, layout and formatting).
The Internet gave us infrastructure to link computers together, but not a user-friendly top layer: that would have to wait until the World Wide Web. In the meantime, home computers and dial-up modems enabled what many see as a golden age of online communication: the bulletin board. This filled a gap left by email: discovering new people. If you receive an email, it must be explicitly sent to you (or a mailing list you're part of). The bulletin board system (BBS), by setting up a public forum, allowed a new mode of online interaction: talking with strangers, making new friends (or enemies), and discovering (or rejecting) new points of view.
Two examples of BBS systems.
An acoustic coupler phone modem, as used to connect to early BBSs.
The first BBS was developed in Chicago, during the Great Blizzard of 1978, by Ward Christensen (who owned a spare computer and modem) and Randy Suess (who installed them at his house). Before automatic modems became available, users had to use a normal telephone, dial it by hand, and place it into a cradle on their modem - and at the very beginning, only one person could dial into Christensen and Suess's BBS at a time.
The BBS had one important competitor: Usenet, a distributed, constantly-updated stream of articles and posts divided into categories like humanities.education and sci.psychology. There were many BBSs, each different; there was only one Usenet, shared and copied among servers all over the world, and giving access to the same list of groups.
Bulletin board systems and Usenet were the backbone of online communication until the rise of the Web, originally proposed by Tim Berners-Lee in 1989. Rather than dialling into a BBS, users could simply type the address into their Web browser.
The Web brought a profusion of forums, chat rooms, and news aggregation websites. It also brought the social networks.
The present: the industry of communication
There are a few BBSs still operational (PTT, based in Taiwan, still serves thousands of users during peak hours), but most require special software to connect to. The distributed network that is Usenet is also still working, easily accessible (and archived) via the Google Groups service. But their past glories are behind them: for the vast majority of Web users, they have been replaced by the giants of online communication, the social networks. The top six have between 16 million and 2 billion active users (at least one visit per month):
|Network|Users active at least once a month|
|Snapchat|301 million|
|…|106 million|
|Google Plus|16 million|
These networks are at the core of modern online communication. Email is still in everyday use, but has become a tool for business and life administration rather than communication (it was never useful for discovering new people anyway). Most of our online friendships, debates and acquaintances happen on social media. Worldwide, the average Internet user spends over two hours on social media every day1. This is not surprising - people love to communicate, share, debate, and argue. But what are the forces that drive social media, and are they beneficial to online conversation - or harmful?
Yeah so if you ever need info about anyone at Harvard, just ask.
I have over 4,000 emails, pictures, addresses, SNS
People just submitted it.
I don't know why.
They "trust me"
- Mark Zuckerberg, 2004
The words of a reckless 19-year-old, surely. After all, Facebook's mission statement is to “Give people the power to build community and bring the world closer together.” Facebook is a mature company run by adults. Their approach must be balanced and safe. Right?
Let's take a closer look.
Social media and political influence
This May in the UK, Facebook hired former advisers to senior Labour and Conservative politicians "to help politicians and governments make good use of Facebook."2 In the UK's 2015 election, major parties spent over £1.5 million on Facebook advertising.3 It's normal to spend sums like this on traditional advertising - but Facebook, for the first time, allows political messages to be targeted specifically at voters of particular nationalities, opinions and backgrounds.
Alexander Nix, CEO of Cambridge Analytica, presenting his firm's psychometric profiling toolkit.
During President Trump's election campaign, its digital operations division employed around 100 staff. From April 2015 to February 2016 - before the campaign was fully under way - it paid $53,000 to Facebook.4 As the election approached, digital spending rose to $70 million per month, a sizeable proportion of which went to Facebook.5 By the end of the campaign, its voter database - called Project Alamo - contained over 4,000 data points on each of 220 million US citizens. This data came, among other sources, from certified Facebook marketing partners and from Cambridge Analytica, which uses Facebook activity to estimate a user's personality traits.5
Cambridge Analytica is currently under investigation by the UK's information commissioner6 over its connection with the pro-Brexit campaign Vote Leave, which also spent over £3.5 million on targeted advertising through the social media contractor AggregateIQ.7 The other major pro-Brexit campaign, Leave.EU, was strictly barred by electoral law from sharing data or coordinating with Vote Leave. However, a leaked agreement shows that the data firms employed by the two campaigns - AggregateIQ and Cambridge Analytica - had a close working relationship, and sources indicate they shared the same database.8
These accusations could be exaggerated, but if they are accurate, they represent a serious infringement of electoral law and, essentially, voter manipulation. Even if they are not, one fact is undeniable: Cambridge Analytica has access to a proven, scientifically informed mathematical model which, given access to a person's Facebook likes, can predict their political leanings better than their spouse can.10,12 With more data, the model makes better guesses than the person themselves.11,13 There is also convincing evidence, from a study conducted by the Online Privacy Foundation, that psychometrically targeted adverts are actually effective at swaying decisions.14,15
Actively working to package up and sell user data, as well as to encourage governments and opposition parties to buy targeted ads, does not fit well with Facebook's stated mission to "give people the power to build community." It allows any agent with sufficient funding to manipulate the community's opinions. But what about the individual level? What effect do social networks have on personality and behaviour?
Social media and psychology
The most obvious problem is harassment. 41% of Americans have been personally subjected to harassing behaviour online, and 66% have witnessed such behaviour directed at others.16 In a 2016 study performed in Belgium, 29% of adolescents had experienced unwelcome sexual or gender-degrading comments.17 In nearly half of the cases reported to the social network, no action was taken.
On Twitter, dozens of automatic messages from bots pretending to be people.
These problems persist because social networks do not provide effective tools to manage harassment. On some networks, your profile is public and anyone can send you a message; this is a serious problem if you are a target for harassment (for example, a celebrity or a member of a minority group). If you have the option to restrict your profile to a group of contacts, there is usually no finer control: no way to divide your contacts into friends and acquaintances, no way to establish an inner circle whom you trust.
Apart from harassment, social network use is linked to serious psychological harm. Since 2012, rates of adolescent depression and suicide have increased dramatically. Adolescents aged 13-14 who spend 10 or more hours a week on social media are 56 percent more likely to say they’re unhappy than those who devote less time to social media. Among 13-14 year olds, those who spend six to nine hours a week on social media are still 47 percent more likely to say they are unhappy than those who use social media even less.19
Teenagers who visit social-networking sites every day but see their friends in person less frequently are the most likely to agree with the statements “A lot of times I feel lonely,” “I often feel left out of things,” and “I often wish I had more good friends.” And 13-14 year olds who are heavy users of social media increase their risk of depression by 27%.19
Social networks present posts to a very wide audience: the user's contact list (Facebook) or the entire world (Twitter). Writing a post for public view is very different from writing a message for a specific audience. Research has shown that social network use leads to a restriction of expression and the careful management of online personas. This "chilling effect" has recently been shown to extend into offline, face-to-face interactions as well.24
Finally, social networks present another worrying network effect: the spread of misinformation.
These are worrying effects for networks that claim to enable communication. But we must remember that these networks are run for profit, and have shareholders and board members to satisfy. They need to find a revenue stream, and when offering a free service, some of the only ways to earn money are through selling advertising and selling data.
Social networks: ad networks, not services
It is thus a cause for concern when a Facebook advertising document claims to offer targeting to display ads at “moments when young people need a confidence boost.”26
Facebook, selling its real product: advertising.
Even if social networks have good intentions, bad things can still happen. Imagine, for example, that Twitter changes its news feed algorithm with the completely accidental result that more extremist posts appear on users' news feeds. This makes users angrier, and so they spend more time on Twitter, driving up metrics and ad revenue. Without any immoral intentions from the designers, networks can "evolve" to act in damaging ways.
Modern social networks are not services which serve the needs of the individual; they are tools which sustain their own existence. For each one of its users, Facebook earns over $16 each year in advertising and data sales revenue, with a total net income of $10 billion in 2016.27 Its users are not customers: they are the workforce that generates this revenue.
The only way to develop a tool which nurtures positive communication, doesn't allow its users to be manipulated, and respects their attention and their mental health, is to design it that way from the ground up.
The future: open, nonprofit online communication
How do we work towards a world where online discourse is free of interference by targeted political messages, commercial interests, and profit?
Lyra Communications Ltd is led by a cognitive neuroscientist and an experienced software architect. Our platform, Lyra, is a conversation service with a single purpose: to enable open, interference-free online discussion.
Lyra protects discourse from external forces with two simple rules: we will never pass on or sell users' data to third parties (or buy data on our users from third parties), and there is no advertising on our platform.
An ethical business model
We believe that the best way to support open online conversation is to remove the influence of profit from the design of the platform. Lyra is committed to operating under a nonprofit business model. We are currently a UK limited company, but will eventually operate as a UK community interest company (CIC), with legally binding restrictions on how our assets are used.
We do not seek investments which require a return. Rather, we are funded by a very low subscription fee (currently £2.99 per year) paid by our users.
Between 2005 and 2016, the proportion of the world's population with access to the Internet jumped from 16% to 47%.20 Open online discourse should be available not just in the developed world, but to all Internet users. Many social networks deal with this by offering a "free" service paid for by advertising revenue. Our approach is to waive our subscription fee in countries whose GDP per capita is less than 10,000 international dollars per year (currently, 82 countries).
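The pricing rule above reduces to a single threshold check. As an illustrative sketch (the function name and the details of currency handling are our own, not Lyra's published code):

```python
def annual_fee_gbp(gdp_per_capita_intl_dollars: float) -> float:
    """Return the yearly subscription in pounds. The fee is waived in
    countries whose GDP per capita is under 10,000 international dollars."""
    return 0.0 if gdp_per_capita_intl_dollars < 10_000 else 2.99

print(annual_fee_gbp(4_500))   # waived
print(annual_fee_gbp(42_000))  # standard £2.99 fee
```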
Putting the user in control
Styles on Lyra. The user should be in control of the visual style they use to view a conversation.
Lyra's central concept is the conversation, a space which provides tools for its participants to comment and discuss. A conversation is either private (only specific people can see it) or public (open to the world). Each conversation is started by its owner, who controls whether it is private or public.
If a conversation is private, it needs an audience - a list of people who can access it. The audience is always shown in the right menu. Public conversations don't need an audience, as they are visible to the world - but they can still have one. In all cases, the owner can add and remove people from the audience at any time. The audience is also notified when the conversation is started, and kept up-to-date with messages therein.
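The ownership and audience rules can be sketched in a few lines. This is a hypothetical model for illustration, not Lyra's actual implementation; all class and method names are our own:

```python
from dataclasses import dataclass, field

@dataclass
class Conversation:
    """A conversation has one owner, a privacy flag, and an audience."""
    owner: str
    public: bool = False
    audience: set[str] = field(default_factory=set)

    def add_to_audience(self, requester: str, user: str) -> None:
        # Only the owner may change the audience.
        if requester != self.owner:
            raise PermissionError("only the owner can edit the audience")
        self.audience.add(user)

    def can_view(self, user: str) -> bool:
        # Public conversations are visible to everyone; private ones
        # only to the owner and the people in the audience.
        return self.public or user == self.owner or user in self.audience

convo = Conversation(owner="ana")            # private by default
convo.add_to_audience("ana", "ben")
print(convo.can_view("ben"))    # in the audience
print(convo.can_view("carol"))  # not in the audience
```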
The conversation embodies one of Lyra's principles: control. For-profit social networks rely mainly on public posts, whose audience is either the entire world (e.g. Twitter) or the entire contact list (e.g. Facebook). Intuitive options for restricting the audience to a particular group are rarely offered. Lyra allows a conversation to be made private at any time, and its audience to be set by the owner.
Choosing what you read
The main point of interaction with a for-profit social network is the news feed. This is completely under the network's control. While there may be limited options to show more or less of a particular type of post, the news feed is generated by the network's algorithms which aim to show the most "engaging" posts. What are engaging posts? Those which the network's users interact with the most, allowing the network to generate more revenue from views of advertising.
News feed algorithms can have unpredictable effects. Most networks have removed the option to see posts in chronological order, or made it difficult to use. Facebook's news feed, for example, prioritises people you've interacted with, creating filter bubbles which reduce the diversity of posts you see. Experiments have shown that this encourages the spread of misinformation and fake news22 and has harmful effects on personal wellbeing.23
Groups on Lyra allow you to control reading and conversation audiences. Group names and members are private to you.
Lyra approaches this problem by allowing full control over your news. You can place your contacts into groups which you name yourself (friends, family, colleagues, acquaintances, enemies, the bowling team, or whatever you like). These groups are private to you: others cannot see what your groups are called or who is in them. When you read your news, you can select particular groups to read news from. According to your wishes, you can read conversations from just friends, or just family, or just the bowling team - or any combination thereof. You, rather than an algorithm with unclear aims, are in control.
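The reading filter described above amounts to a set union over the selected groups, followed by a filter over conversations. A minimal sketch, with invented group names and an invented data layout:

```python
# Private, user-named groups: each maps a label to a set of contacts.
contacts_by_group = {
    "friends": {"ben", "carol"},
    "family": {"dana"},
    "bowling team": {"ben", "erik"},
}

conversations = [
    {"owner": "ben", "topic": "weekend plans"},
    {"owner": "dana", "topic": "reunion"},
    {"owner": "erik", "topic": "league night"},
]

def news_for(selected_groups):
    """Return only conversations started by members of the chosen groups."""
    readers = set().union(*(contacts_by_group[g] for g in selected_groups))
    return [c for c in conversations if c["owner"] in readers]

print([c["topic"] for c in news_for(["friends"])])
# → ['weekend plans']
```

The key point is that the reader picks `selected_groups` each time; no ranking algorithm intervenes.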
You can also easily add a group to the audience of a conversation. Because the audience is always shown to people in the conversation, group members will see that the others are in the audience - but they will never know that you have put them in a group, or what that group is called.
Groups fulfil a fundamental human need to stratify and differentiate relationships. Not all our contacts are on the same level: some are friends, some are distant acquaintances, some are people we do not like very much. For-profit social networks treat all your contacts in the same way. Lyra puts you in control.
Enabling complex conversations
A message in a Lyra conversation looks like this:
The author's name and username are shown at the top. Lyra places no restrictions on your name, which can be changed to whatever you like, whenever you like. Each account has a fixed username, allowing it to be identified.
Complex conversations contain many messages. Traditional networks arrange messages in a line, so that one follows the other and you can only reply directly to the most recent. With Lyra, you can reply to anything: messages are arranged in a tree:
The tree system is very powerful. Conversations can branch and diverge, exploring different topics. Alternative angles can be discussed. Multiple threads can coexist in a single conversation. And it's easy to look back over the structure of the conversation, to see how it has evolved and progressed.
Since 1965, online conversation has taken a bewildering variety of forms on a dazzling array of platforms. Some have fallen into disuse; some have evolved into their modern incarnations. But most of our online conversation, especially among younger people, takes place on platforms which manipulate their users and compete amongst themselves for the most precious resource: your time and attention.
Lyra offers a different approach. Together, its principles (audience control, reading control, conversation ownership, branching messages, and a nonprofit business model) combine to form a platform which prioritises expression and communication.
One of Lyra's principles is a reluctance to force itself on the user. We won't be doing any aggressive email marketing campaigns or sending annoying notifications. If you think online conversation can be done better, invite a couple of friends to Lyra and start a conversation. The rest is up to you.
These are the principles which guide Lyra's design.
- Conversation and discourse are important.
- The written word is the best way to have conversations on the Web.
- Your social experience should be controlled by you, not by an algorithm.
- A trustworthy service does not aim to maximise the amount of time its users spend using it.
- A conversation platform should do one thing, do it consistently, and do it well.
- Advertising is harmful to discourse.
- Profit is not the best motivation.
This is our agreement with you.
- We will never sell or give away any data concerning you. Not one single bit.
(Except where required by UK law. Like many other popular sites, we use Google Fonts, which allows Google to see certain request headers - but none of your personal data.)
- We will never require you to view advertising.
- Lyra does not assume copyright over any messages you write. Copyright remains with you.
- We will always remain committed to supporting meaningful conversations.
- We will provide a stable, permanent platform which will always work more or less as it does now. We will not sell out or give up control of our platform.
- We are a non-profit. We will never charge you more than is necessary to cover our running costs and pay our engineers.
- We will never discriminate against any group on the basis of race, gender or anything else.
Lyra Communications Ltd
Covent Garden, London
UK registered company no. 10534260
In the interests of openness, we provide references to all of the findings reported in this article.
Bibliography
GWI report, 2017
 Guardian article, 2017
 Register article, 2017
 Forbes article, 2017
 Medium article, 2016
 Guardian article, 2017
 Politics Home article, 2017
 Guardian article, 2017
 Guardian article, 2017
 Kosinski, Michal, et al. "Facebook as a research tool for the social sciences: Opportunities, challenges, ethical considerations, and practical guidelines." American Psychologist 70.6 (2015): 543.
 Motherboard article, 2017
 Lambiotte, Renaud, and Michal Kosinski. "Tracking the digital footprints of personality." Proceedings of the IEEE 102.12 (2014): 1934-1939.
 Youyou, Wu, Michal Kosinski, and David Stillwell. "Computer-based personality judgments are more accurate than those made by humans." Proceedings of the National Academy of Sciences 112.4 (2015): 1036-1040.
 Guardian article, 2017
 New Scientist article, 2017
 Pew Research Centre, 2017
 Van Royen, Kathleen, Karolien Poels, and Heidi Vandebosch. "Help, I am losing control! Examining the reporting of sexual harassment by adolescents to social networking sites." Cyberpsychology, Behavior, and Social Networking 19.1 (2016): 16-22.
 Telegraph article, 2016
 The Atlantic article, 2017
 Del Vicario, Michela, et al. "The spreading of misinformation online." Proceedings of the National Academy of Sciences 113.3 (2016): 554-559.
 Arad, Ayala, Ohad Barzilay, and Maayan Perchick. "The Impact of Facebook on Social Comparison and Happiness: Evidence from a Natural Experiment." (2017).
 Marder, Ben, et al. "The extended ‘chilling’effect of Facebook: The cold reality of ubiquitous social networking." Computers in Human Behavior 60 (2016): 582-592.
 Guardian article, 2017
 Ars Technica article, 2017
 Campaign article, 2017