Arc 4
- Techler
- Nov 14, 2018
- 6 min read
Outline of Questions - historical lens? Robber barons + INTERNET unintended consequences
How does unconscious bias lead to detrimental consequences?
In what ways do the methods companies use to measure success fail to factor in external costs?
Bias controls our perception due to the sunk-cost fallacy, confirmation bias, ego
Argumentative theory - we need other people
What is the incentive for companies to pursue philanthropic projects and total societal impact?
(move?) philanthropy of robber barons (to fill less profitable needs) during second industrial revolution combined with improvement of tech in general leading to better quality of life for people
The start of the internet was all about connecting people and giving them information (what about unintended consequences and how this changed)
Airbnb, Lyft - companies differentiate themselves for better branding
Gates foundation, nonprofits - have extra money and power and want to do good with it
Historical lens - the rise of philanthropic individuals during the second industrial revolution and how this still holds true but has also shifted to companies
Is having a good image/doing good really profitable?
Total shareholder return + shareholder theory -> Total societal impact, stakeholder theory
Shift in responsibility - consumers are paying attention
How do non-profits and philanthropists ensure they are actually fulfilling a need?
So many more startups - people really want to have an impact
Sometimes they don’t serve a real need, or resort to fraud in order to succeed.
The internet and social media seem like the future of everything!
Promise of connecting the world
Historical lens - What people thought the internet was going to be like
The rise of algorithms
We have no idea what’s going on inside the black box
We’re not using the right criteria to make decisions
Biased decisions in the past
Sensationalism and profits
Unintended consequences
YouTube
LinkedIn hiring only people like themselves - leads into the next segment on their tools for counteracting this, and Slack
Why do the people with the most power and influence in the tech industry not understand the needs of all groups?
Understanding needs and philanthropy is a complicated issue
People with power and networks gain more power and networks
LinkedIn and Slack - trying to do some work to counteract this
How can people expand their network and have more diversity of thought in their lives?
It’s not just companies, individuals should be responsible for their impact
Go back to argumentative theory
Bringing in people that are from diverse backgrounds and understand certain issues deeply
The importance of making those people feel included (below)
How can companies increase diversity in their organizations and make those people feel included in the right conversations?
Slack example
Social media - using it to connect people from different backgrounds rather than reinforcing the networks people already have
Loop back to companies developing technology that hinders diversity and free thought (?? how do I do this)
What is the big idea here in terms of society?
AI + social good speaker panel
It isn’t up to companies or any one person to decide what is socially good and what direction we want to go as a society
No one should let algorithms define their lives if they don’t like it
This is ultimately a political issue and we need to keep talking through it
Speakers said they were never “solving” issues, but that doesn’t mean it’s impossible to make genuine progress with the right intentions and ethical basis.
Key connections: How does diversity of thought lead to a better pursuit of philanthropic progress?
How does diversity in the workplace lead to tech (platforms?) reaching diverse groups of people?
Notes from AI for Social Good
During an event titled “AI for Social Good” hosted by my organization, Blueprint, I facilitated a speaker panel with accomplished individuals from multiple fields who are focused on using AI for the betterment of society.
Christine Robson - a product manager for Google's machine learning teams
Healthcare
detecting cancer by looking at lots of images
Recommended ER readmission based on many factors (not just a black-box diagnosis) that doctors can look into
Earthquake aftershocks
Whales
Underwater audio turned into image to be classified
Goal is to track the whales
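The panel didn’t walk through Google’s actual pipeline, but the audio-to-image step is easy to sketch. A minimal version of the general idea (assuming a short-time Fourier transform; the frame sizes and the toy 440 Hz “call” below are made up for illustration):

```python
import numpy as np

def audio_to_spectrogram(signal, frame_size=256, hop=128):
    """Slice a 1-D audio signal into overlapping frames and take the
    magnitude of each frame's FFT. The result is a 2-D "image" of
    frequency content over time that an image classifier can ingest."""
    starts = range(0, len(signal) - frame_size + 1, hop)
    frames = np.array([signal[i:i + frame_size] for i in starts])
    window = np.hanning(frame_size)   # taper each frame to reduce spectral leakage
    spec = np.abs(np.fft.rfft(frames * window, axis=1))
    return spec.T                     # rows = frequency bins, columns = time steps

# A fake one-second "whale call": a pure 440 Hz tone at an 8 kHz sample rate
sr = 8000
t = np.arange(sr) / sr
call = np.sin(2 * np.pi * 440 * t)
image = audio_to_spectrogram(call)    # 2-D array: frequency bins x time frames
```

In the resulting image, a sustained call shows up as a bright horizontal band at its pitch, which is exactly the kind of pattern image classifiers are good at spotting.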
Josh Kroll - postdoc specializing in governance and computer systems at the U.C. Berkeley School of Information
Cognitive bias is built into our brains and is difficult to observe
Even if algorithms don’t explicitly know your race, they can get a pretty good idea from zip codes and other factors
Just not using race isn’t a silver bullet
ML presents a problem when its harms are allocative or representational
In other words, when real people are affected by its outputs
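The zip-code point above can be shown with a toy example (every zip code, label, and number here is invented): a rule that never sees race can still split its decisions exactly along racial lines when zip code is a perfect proxy.

```python
# Toy data: (zip_code, race, historical_approval). Race is perfectly
# correlated with zip code here, an exaggeration of real-world redlining.
applicants = [
    ("94110", "A", 1), ("94110", "A", 1),
    ("94601", "B", 0), ("94601", "B", 0),
]

# A "race-blind" rule learned from history: approve applicants from a
# zip code if that zip code was mostly approved in the past.
rate_by_zip = {}
for zip_code, _race, approved in applicants:
    rate_by_zip.setdefault(zip_code, []).append(approved)

def race_blind_decision(zip_code):
    history = rate_by_zip[zip_code]
    return sum(history) / len(history) >= 0.5

# The rule never saw race, yet its decisions split exactly along it.
decisions = {race: race_blind_decision(z) for z, race, _ in applicants}
```

Dropping the protected attribute changes nothing here, because the model recovers it from a correlated feature, which is the point Kroll was making.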
Ilya Kirnos - CTO and founding partner of SignalFire, a venture capital firm with a focus on technology and AI investments
As engineers, they have a better understanding of which technologies are fads and which will actually take off.
They have large endowments and invest in 10-year increments
Looking more at long term
There are different types of AI
Low stakes - underhyped
Google ads, search (he used to work on)
High stakes - overhyped
Cars, medicine
Could have bad consequences if they mess up
Need to be proven on more low stakes scales
Invested in self driving forklifts
10,000 forklift injuries per year
Short run - jobs are lost, industry really affected
Long run - fewer injuries and fewer lives lost!
They actually call up warehouses and ask if there’s really a need for the tech
Ensure impact
There is a sweet spot for autonomy that is actually going to be accepted into the world
Their goal is to back transformational companies
Not necessarily to pursue social good, but they do consider externalities
They would feel bad hurting the world and their investors wouldn’t like it
Q&A panel:
What kind of regulations should be put on AI, if any?
What is the definition of social good companies are going for and are they defining it correctly?
Notes on the rest of the podcast
This podcast was all about the unintended consequences of technology, such as the dark side of YouTube promoting terrorism and AI making communism efficient.
Kids were seeing weird unboxing videos on YouTube and ended up being exposed to things they shouldn’t have because of the keywords
YouTube wants to give users videos that are as sensational as possible because that provokes the strongest reactions and people will watch more
In this case, more views does not mean more people are learning from or benefitting from the service.
YouTube seems like a giant experiment on humans, yet we are not looking at the results and realizing how badly the experiment is going
Terrorists use the internet to recruit people to join their organizations
The people who build the platforms need to keep in mind that people with bad intentions are using their platforms as well.
Edward Tenner: Can We View Technology's Unintended Consequences In A Positive Light?
“the positive outcomes that we expect are usually not nearly as positive as we imagine them. But also, the negative things don't turn out in the same way. For example, we tend to think that what is going on is just going to go on and on and get worse and worse, or it's going to go on and on and get better and better. And reality usually has surprises for us.”
“I don't think it's really terribly helpful, unless you're actually working on something concretely to deal with a problem, to worry too much about the problem if there isn't something that you can do about it.”
Opportunity arises when things go wrong (we are able to learn)
Good things do not always turn out as good as expected
Why worry about things we can’t change when the results will be different than expected anyway?
Everything happens for a reason
Titanic examples, Black Plague example
James Bridle: What Do Kids' Videos on YouTube Reveal About the Internet's Dark Side?
Dark side of YouTube, where children are exposed to potentially dark content and people make extremely long videos and pirate content in order to collect ad revenue
“we've decided to optimize for reactions and sensation over other forms of kind of verifying knowledge”
There are other things we could decide to do
The tech we build is an experiment that we can then use the results of to change
Is what gets us watching really the best thing we should be shown?
Yasmin Green: How Did The Internet Become A Platform For Hate Groups?
Thought the internet was going to be perfect
“We were just like, we're connecting people to information, to each other. This is going to transform democracies, and it's going to empower populations.”
ISIS took off because they had a lot of influence on the internet, radicalizing people who had questions to which there were no answers
“And it's easy and dangerous to say, well, there are good people and bad people because it ends up - the prescription is really punitive technologies or policies, which is, let's suspend people or let's censor people or let's punish people.”
There is no easy fix to the problem - we can’t just push everything we don’t like to the dark web
The key is access to information
“Like, it's not enough just to focus on your platform and the, you know, the micro-instances that you see. Like, you have to think about terrorist groups and their goal and their strategies and what they're doing across the whole Internet. And you have to have a big-picture view. We can't be so tunnel-visioned anymore. The more that we do that, the better we'll be at spotting problems early.”
It’s really important in general to think about the potential use of technology by hate groups and people with bad intentions
Anticipating and correcting for the actions of these people and constantly rethinking our technology is the only thing we can do