Arc 3
- Techler
- Nov 12, 2018
- 8 min read
Bias in technology prevents social good in numerous ways. How do we stop this bias?
With all the lies going around today (including companies’ stated missions, their diversity statistics, and their claims about how they’re changing the world), how do we focus on truth and impact?
Truth, bias, and technology for social good
Arcs:
Business and technologist ideas:
Business endeavors and corporate philanthropy
Bias for our own ideas (we don’t want to be told they’re wrong or that we don’t understand the problem)
Money and influence not being used in the best way
Sometimes the needs of communities are really recognized (Gates Foundation)
Sometimes they’re not - need more examples (I’m currently just talking about companies that have recognized their issue and are trying to fix it)
Using social work to differentiate self from competition
Modern culture:
Biases become amplified with social media AND the people who already have money and power will give money and power to more people like them
Bias for people like us creates more bias
Polarization
Machine learning
How do we think outside the box, have better ideas, and educate ourselves
Checks and balances to our thought patterns
We don’t want truth because it might not fit with our worldview
It’s difficult to acknowledge that you may be doing things wrong
Hard to focus on the long term
What’s the incentive in this economy to focus on tsi?
More customers if company differentiates self by being philanthropic
More diversity leads to better decision making - prove?
Slack, LinkedIn, Gates Foundation
Questions:
How does the bias that people from privileged backgrounds have about what is best for the world prevent them from doing social good with their businesses? Is it possible to stop bias? In what ways can people expose themselves to new ideas so they can potentially change their minds? Companies try to fix these issues, but what if they’re not willing to admit that their technology may do more harm than good regardless of what they do (Facebook)? Do individuals who work for these companies, or buy products from them, have a responsibility to keep them in check? Are people incentivized to buy products they know are beneficial to the world, or does their own bias prevent them from seeing this? How much does it take for people to realize the harm of technology and start to take counteractive measures?
Humans have unconscious bias toward their own ideas and views of the world, as well as toward people like them. But it is only when people are exposed to new ideas that they actually understand how their skills, connections, and influence really affect the world. Bias is hard to detect and much more difficult, perhaps impossible, to counteract, because people live in their own bubbles. It’s so much easier to think that the way one lives is good, or has no effect on the world, but in reality, everything that every person does matters. Or does it? We could say that since people have such vast networks nowadays, and technology (especially social media) has such a significant, difficult-to-understand impact on people, their actions matter more than ever. But people are also increasingly polarized. Social media and the bubble of the tech industry mean that the people with the greatest influence are surrounded by others like them and are worlds away from the majority of people: namely, those negatively affected by the “solutions” they are building. (Uber driver example.) Unconscious bias among the ranks of technologists is a significant factor in why tech seems to create more problems than it solves.
The role of unconscious bias in the pursuit of technology for social good.
Bias in technology prevents social good in numerous ways. How do we stop this bias?
Iteration, impact, growth, listening, diversity
Show where bias is, attempted solutions - but also how they fall short of solving the problem
Ultimately, unconscious bias is part of human nature. It protects us from potentially detrimental ideas and brings order to our lives. But without exposure to new ideas, it is impossible to understand what could make our lives, and the lives of those around us, better. The recognition of bias is not really something that can be regulated by law. It is up to individuals to recognize where they can expose themselves to new ideas in their own lives, and how they can use their influence (especially from a company standpoint) to help others do the same. Technology has enormous influence. Platforms are used by everyone, and the technology that runs them plays a role in bias as well. The technology we have today determines the technology and lives of people tomorrow, so we need to be responsible for what we build, or we will create a positive feedback loop that further separates us from each other and makes it that much harder to realize the many ways our understanding of the world and of people’s problems is so incredibly wrong.
Technology is stuff created by humans to solve problems using what we know about science and our lives.
Coming back from a day in San Francisco and too lazy to figure out the bus system, my friends and I opted for the cheapest, easiest form of transportation. We opened the Uber app and waited a short five minutes for our ride to come. Looking at the driver’s profile, we noticed that this man had driven somewhere along the lines of 20,000 rides for Uber. We were astonished that this number was even possible. We asked him about it and learned that he had been driving full time for three years. It took some probing questions to really get his thoughts about Uber out of him, but our curiosity and empathy for his situation got him to open up. He felt trapped: he had started working for Uber because of the promise that it would help him get ahead of the insane rent in San Francisco, but he ended up stuck with car payments, barely able to make his rent and help out his family. He told us that he’s hungry, and that Uber takes a really large cut (60%) of his earnings, but it varies a lot and he never knows how much he is going to be paid. We were especially touched when I asked him what he thought was going on in the minds of Uber executives. He said: “They don’t really see us as people just because we’re at the bottom of the ladder. They’re blood-sucking vampires focused on themselves and money, and they don’t even realize who they’re hurting and how much.”
I volunteer for a student organization here at Berkeley called Blueprint. Our mission is to build technology for social good. We work with nonprofits that we believe understand the needs of certain, often underserved groups of people, and we build technological solutions that help them operate at scale and impact lives in a beneficial way. Tech for social good seems like a simple slogan: you just volunteer and write some code and the world will be a better place, right? Not necessarily. We choose the organizations that we think will be most impacted by our help. Our technology may not be as impactful as it could be, either because it doesn’t serve the needs of the nonprofit, or because the nonprofit’s ideas about how to influence certain groups positively are incorrect. If that is the case, then we have not used our skills as developers and project leaders to their full advantage. In a way, we have failed people who could use our help. This is an idea whose complexity I didn’t fully understand before joining, but it is so interesting to see how much thought goes into finding the best ways to make an impact. If we just focused on making the fastest, best-looking websites, no one would care. What matters is how people’s lives are changed by what we build.
Business endeavors and corporate philanthropy
People are biased toward their own ideas and anything they contribute to; if they’re convinced the work they do is good, they won’t pay attention to whether their company may be leading the world in a detrimental direction
These days, the people with money and influence are also the people that build technological solutions
There is a bigger gap between classes, so certain groups of people make decisions that affect everyone
Sometimes the needs of communities are really recognized
Either in the case of nonprofits being formed or tech companies being philanthropic
Sometimes they’re not
This podcast was all about the unintended consequences of technology, such as the dark side of YouTube promoting terrorism and AI making communism efficient.
Kids were watching weird unboxing videos on YouTube and ended up being exposed to things they shouldn’t have seen because of the keywords
YouTube wants to give users videos that are as sensational as possible, because sensational content provokes the strongest reactions and keeps people watching
In this case, more views does not mean more people are learning from or benefiting from the service.
YouTube seems like a giant experiment on humans, yet we are not looking at the results and realizing how badly the experiment is going
Terrorists use the internet to recruit people to join their organizations
The people who build the platforms need to keep in mind that people with bad intentions are using their platforms as well.
Ideas for Lenses
Controversy + conceptual phrase: where there’s the most intellectual heat? Why is it not so simple?
Move fast and break things start-up culture and focusing on short term gains for companies
Products are built faster and people feel like they’re helping the world or making progress
There may be unintended consequences where the full impacts of products are not thought about and there is nothing set up to regulate their effect
Evidence of this!
Do the people with the most skill and connections really know what is best for the majority of people? - probably not
Breaking developments + conceptual phrase: add to the conversation by bringing news / insight / primary sources / new trends
Tech money going into Prop C, the Gates Foundation, LinkedIn, Pinterest, Facebook
Horizontal lens + conceptual phrase: gather an array of sources, put them in conversation while focused on the concept, to ground us (and you) in context
How a bunch of different companies’ products affect certain people, and what they have done and not done to make those effects more beneficial.
Historical lens + conceptual phrase: provide perspective with history focused through a concept
The concept that people feel a responsibility towards society and use their own wealth, no matter how they earned it, for societal progress.
In the past, things were a little simpler. People wanted cars and railroads, and the world was just being developed. It made sense to capitalize on those markets
The tech industry is a little different. It seems like we’re searching for every little thing that could be just a bit more convenient.
Philanthropists from the tech industry use their power and money to help people, as well as giving people opportunities by hiring them at their companies.
There is a lot more weight on social justice now, and jobs are competitive
There has also been a rise in nonprofits and a focus on charity work
It didn’t use to be so easy to start a company. Now everyone can do it but there’s also lots of opportunity for fraud and catastrophic failure.
Labor unions once played a role in allowing diverse groups of people to be represented, but now they barely exist — so are the opinions of these masses of people no longer heard?
Vertical lens + conceptual phrase: how might an in-depth case study help us in a different way?
In depth: the Gates Foundation went from uselessly giving people computers to identifying needs well
Conceptual lens: what if you approached your topic with an outside thinker, like Goffman, or from a different industry/field/discipline, like film if you’re writing about music?
Psychological ideas about bias and refusal to listen to other people’s opinions.