Social Computing Symposium

Kate Crawford:

there is a huge growth of listening machines, from Siri to Cortana to Google Now

I focus on the social and ethical implications of large data systems, and I find listening machines fascinating

in 1650 the Jesuit Kircher built the talking statue - a large shell in the wall that listens to the piazza

the talking statue was designed as a kind of elite magic - only the elites knew it could exist

listening machines have always been about power, about class and invisibility

what's different now is that listening machines are becoming ubiquitous and have personalities

all this recorded audio can provide a kind of time travel, where you can go back through everything said by everyone in your house

Hello Barbie's guts are controlled by ToyTalk - they listen to children's questions and come up with new answers

So Hello Barbie is the reverse of the talking statue - it listens in your house and sends what it hears out to the public

how we handle friendship requires forgetting - with perfect archives we can't do this

Tim Hwang:

when does our interest in moderation conflict with our interest in privacy?

this is basic panopticon design 101 - it serves notice that you are being watched

so what is different from Leviathan? a platform implemented in code, and at a huge new scale

code moderation implies persistent identities and a persistent record of actions for the code to run on

platforms have perverse incentives to maximise the EMO threshold - Engagement Maximising Outrage

platforms see themselves as passive infrastructure - CDA 230 encourages a hands-off approach to this

the centre of gravity of commercial entities is that they will moderate less than we'd like them to

we could call for fragmentation - to have a variety of platforms that make different privacy/moderation tradeoffs

Kate Crawford:

the listening machine has now become a key driver for AI - Gates' machines that listen, see and understand

understanding is still hard, but we are getting better at machines that can see and hear

Tim Hwang:

a lot of deep learning and AI systems are very data-hungry processes - they need a lot of data to fulfil the promise

I use listening to mean data collection, but it also implies comprehension—that's not up to human moderator standard

the scale of these platforms makes certain moderation systems difficult

Vivienne Ming:

my wife and I developed a system for the University of Phoenix and found interventions had to happen within 11 minutes to be effective

AI really ought to be Augmented Intelligence, not Artificial Intelligence

what we need is to signal to people that they need to intervene, but to respond fast they need to get the context

there is a strong interest in protecting the privacy of people being harassed, but not the harassers

Q:

What is the social contract? Is there something that we have defined as a society to protect people?

Tim Hwang:

it becomes difficult to do this because of the perverse economics of scale

the ability of users to switch to another platform may not be enough to force the platforms to the table

Vivienne Ming:

education should be about producing happy, healthy, impactful lives - explicitly, measurably and solely

we listened to 10,000 undergrads' and 20,000 MBAs' writing on the coursework discussions

we designed a system that learned economics and biology by reading the students' writing, then predicted grades

we were able to predict everyone's final score within 2% based on their writing on the discussion site
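
(As a rough sketch of how grade prediction from discussion writing could be set up - not Ming's actual system - here is a toy example using TF-IDF features and ridge regression in scikit-learn; the posts and scores are invented:)

```python
# Hypothetical sketch: predicting a course grade from forum writing.
# TF-IDF features plus ridge regression; the data is synthetic, not the
# actual discussion corpus described in the talk.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

posts = [
    "supply shifts right so the equilibrium price falls",
    "idk the homework was confusing, what is elasticity again",
    "marginal cost equals marginal revenue at the profit-maximising output",
    "can someone post their answers before the deadline",
]
final_scores = [88.0, 61.0, 93.0, 55.0]  # made-up grades for illustration

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(posts, final_scores)

new_post = ["price elasticity of demand measures responsiveness to price changes"]
print(model.predict(new_post))  # predicted final score for an unseen student
```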

a bachelor's degree in CS from Stanford was only a very modest predictor of your skill as a programmer

we saw the same patterns of behaviour with skateboarders and with CEOs

cognitive ability and endogenous motivation were predictive of success

endogenously motivated people vastly outperform exogenously motivated people, yet we structure classrooms exogenously

most sales for salespeople happen at the end of the cycle, but those who sell on the first day of the cycle do better overall

we monitored preschoolers to predict their life outcomes - we put 3 mics in the classroom to detect speakers

however it is a cursed crystal ball - when you give people predictions of outcomes it reinforces the bad ones

the people who need this reinforcement the most are the least likely to act on it

Kate Crawford:

the idea of predicting life outcomes worries me - attaching predictions to children changes how teachers behave

if I say "your child is going to win a Nobel Prize" it makes it less likely to happen

Vivienne Ming:

the things I am talking about ought to scare people

people can change direction - I know I did; I don't want predictions to come true, but to change them

Judith Donath:

what if you didn't do the surveillance of kids and sent universally applicable messages to parents instead?

Suresh Venkat:

I am a computer scientist and I study algorithms - sometimes they learn patterns

the same algorithm that tells me I should buy the new Star Wars box set could be used to deny me a loan

Algorithms make mistakes, and make mistakes that are hard to discover in high dimensional spaces

when a camera thinks that you have blinked, but you are Asian and that is how your eyes look

when Google decides that your name implies that you need ads for bail bonds, that is a problem

a machine learning algorithm is a recipe for making recipes, which means you can't know how it made its decision

you can't inspect and debug learned algorithms as they don't justify their parameters
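
(A small hypothetical illustration of the "recipe for making recipes" point: the learner below produces a working classifier, but the only artefact it can show for any decision is its raw weight matrices - nothing in them states the rule it learned. This is a generic scikit-learn sketch, not any panelist's system:)

```python
# The learning algorithm is a recipe for making recipes: we can run the
# learned model and even print its parameters, but the parameters carry no
# justification for any individual decision.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))           # 500 examples, 20 features
y = (X[:, 3] * X[:, 7] > 0).astype(int)  # a rule we know, but the model never states it

clf = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000).fit(X, y)

print(clf.predict(X[:5]))                # decisions, with no reasons attached
print([w.shape for w in clf.coefs_])     # the "explanation": raw weight matrices
```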

can we detect if algorithms discriminate against minority groups? Have they been trained on unfair data?

can we express fairness and justice in such a way that algorithms can learn to be fair?

we want algorithms to be reasonable and logical like Spock, but overruled by the emotional Kirk

now algorithms are like 2-year-olds; they're inscrutable and you don't want them to have control, but they may grow up

even if you have a good algorithm the training data and the evaluation metrics can cause bias

if the algorithm has only a small training set of, say, women in tech jobs, it can discriminate against them
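
(A synthetic sketch of that failure mode, with invented data: the minority group below is both under-represented in training and scored through a skewed proxy feature, so the resulting model rejects its qualified members far more often than the majority's:)

```python
# Hypothetical illustration of bias from skewed training data: the algorithm
# itself is neutral, but the minority group is rare in the training set and
# its proxy feature under-states its real skill, so errors fall on it unevenly.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def sample_group(n, proxy_offset):
    skill = rng.normal(size=n)                       # true qualification
    qualified = (skill > 0).astype(int)              # ground-truth label
    feature = (skill + proxy_offset).reshape(-1, 1)  # what the model actually sees
    return feature, qualified

# training set: 950 majority examples, only 50 minority examples whose
# proxy feature systematically under-states their real skill
X_maj, y_maj = sample_group(950, proxy_offset=0.0)
X_min, y_min = sample_group(50, proxy_offset=-1.0)
clf = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                               np.concatenate([y_maj, y_min]))

# on fresh data, qualified members of the minority group are rejected far more often
for name, offset in [("majority", 0.0), ("minority", -1.0)]:
    Xg, yg = sample_group(5000, offset)
    qualified = yg == 1
    false_reject = (clf.predict(Xg)[qualified] == 0).mean()
    print(name, "false rejection rate among the qualified:", round(false_reject, 3))
```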

Vivienne Ming:

a recruiter will look at your name, your university and your job title at the last job—we look at 50k datapoints

Suresh Venkat:

we are not saying you should not use algorithms, but understand how they introduce bias

Tim Carmody:

the social contract is being replaced by the terms of service, and this sets the new ideology

Suresh Venkat:

what we're seeing is the same old story expressed on a different platform - who has power?

we're seeing the same kind of amplification that happened with the printing press

Dennis Crowley:

I went to Kate's listening machines summit last summer, and I want to talk about what we're doing at Foursquare

we're doing listening too, but not to audio - to where phones go: where did you walk to, how long were you there

we're trying to work out the magical things we can do with that

we sell some of the data, we sell some advertising, but what can we work out from this data? what should we do?

I want to make software like Scarlett Johansson's character in Her that whispers to you occasionally

what if you are walking down the street and it says "see that place? you should go there sometime"

or when you walk into the bar it whispers "3 people you like and one you don't are here already"

with so many of these things, you ask a question and it comes to your aid when summoned - it can't suggest things unprompted

what if the software could summon you and say "Dennis, when you leave ITP you should go to this place"

what if it could suggest a detour while you are walking home to an interesting store in the next street?

we have an opportunity to break people out of their habits - not just reactive systems

software could spot downtime and suggest detours

Latoya Peterson:

how come we always frame things as algorithms making things better, when the biases are irrational?

why do we think we can apply rational solutions to fixing irrational responses?

there is this idea that more, better cleaner data will fix things

Suresh Venkat:

cleaner data may fix it, but we have the bad and dirty data that gets reinforced by the algorithms

Vivienne Ming:

if your data is talking to you, you should see a doctor - you ask questions and get answers

you think your dashboard is giving you answers, but it is just a bunch of bar graphs - you still need to ask the questions

Kevin Marks:

If you ask a person to justify a decision they will confabulate an answer to justify it

I'm worried that machines will be coded to confabulate plausible arguments too

Tom Igoe:

there are times when I don't trust Foursquare, because it doesn't know when there is data that should be just between us

Lili Cheng:

what would you send people when they are walking to work? could you send random ones, like "look left"?

Dennis Crowley:

what if you could make the software be flirty: "I was just thinking about you - you should go to this place…"

we were trying to get people to feel less freaked out about the algorithm knowing about them

Judith Donath:

I want to talk about trust and cute machines that want us to trust them

in 1950 Turing wrote a paper http://www.csee.umbc.edu/courses/471/papers/turing.pdf that suggested machines should talk like humans

Weizenbaum in 1964 made Eliza that could hold a conversation by asking questions

He hoped that people would see that this was not AI, but instead people did trust it

Weizenbaum gave up programming and wrote a book warning of technology built by people who don't understand the social world

I gave a talk to a group of psychoanalysts, and they were an audience that really paid attention and listened well

how many of you have worked as a waitress, where you have to fake emotion?

would you want to know what they really think of you?

this is an artificial seal that is a cuddly toy, but it can behave like an animal, curling up and responding

people with dementia are improved by contact with animals, but can't care for one—using an animated toy works for them

a Tamagotchi was an electronic pet that you had to pay attention to, or it would die

the Japanese Tamagotchi would actually die, and not come back if neglected. The US one you could reset

if you're 3 then the world is a big psychedelic trip anyway, but when toys start actually talking that changes things

there was a Russian experiment where they bred wolves for good personality, and it also gave them neonatal features

instead of using instructions to help us work technology, we make machines look like babies so we take care of them

Hello Barbie is the deep sea angler fish of technology

by morphing your face with a political candidate's face you make the candidate more attractive to you