WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about how I might use my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. However, in these blog posts you'll find my attempts to be a heckler (or techler haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P

PROJECTS

PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Currently, the logical, information-processing side of the human mind is the part we understand best and the part we use to build helpful computers, but complexities at the subconscious level still prevent technology from becoming “human.” However, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and it even offers humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where both robots and people have consciousness? And if that happens, how can these two groups best function together to maximize prosperity?

PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research related to corporate philanthropy, psychological ideas such as argumentative theory, and why advancements in technology have great potential to damage society. Project III is a much more developed version of this project.

PROJECT III

The Social Good Revolution: How Corporate Responsibility can Enable Technological Innovation and Beneficially Impact Society

Abstract: In this day and age, technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology incur short-term gains while corrupting entire populations, and unmoderated corners of the internet decrease participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose concerning ethical and moral questions we have never faced before. The decision to build technology with the benefit of society in mind may change from being the “right” thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are realizing where their algorithms fall short of serving the needs of their customers, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of their new technology, they can build better products that make everyone better off and keep the company sustainable in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than just intention. Argumentative theory explains that individuals must interact and compare ideas in order to dismantle their confirmation bias. People are starting to care more about working for companies that make ethical decisions. They can contribute by questioning corporate intentions, expressing their opinions, and feeling confident in the social impact of the products they build. Companies can also encourage this kind of culture among their ranks by aiming for diversity of thought in hiring and being open about their decision making. These efforts incentivize engineers to work for such companies and make the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices

Arc 3

Bias in technology prevents social good in numerous ways. How do we stop this bias?

With all the lies going around today (including companies' missions, statistics about diversity, and claims about how they're changing the world), how do we focus on truth and impact?

Truth, bias, and technology for social good

Arcs:

Business and technologist ideas:

Business endeavors and corporate philanthropy

Bias for our own ideas (we don't want to be told they're wrong or that we don't understand the problem)

Money and influence not being used in the best way

Sometimes the needs of communities are really recognized (Gates Foundation)

Sometimes they’re not - need more examples (I currently am just talking about companies that have recognized their issue and are trying to fix it)

Using social work to differentiate self from competition

Modern culture:

Biases become amplified by social media, and the people who already have money and power give money and power to more people like them

Bias for people like us creates more bias

Polarization

Machine learning

How do we think outside the box, have better ideas, and educate ourselves?

Checks and balances to our thought patterns

We don’t want truth because it might not fit with our existing views

It’s difficult to acknowledge that you may be doing things wrong

Hard to focus on the long term

What’s the incentive in this economy to focus on total societal impact (TSI)?

More customers if a company differentiates itself by being philanthropic

More diversity leads to better decision making (need evidence to prove this)

Slack, LinkedIn, Gates Foundation

Questions:

How does the bias people from privileged backgrounds have about what is best for the world prevent them from doing social good with their businesses? Is it possible to stop bias? In what ways can people expose themselves to new ideas so they can potentially change their minds about things? Companies try to fix their issues, but what if they're not willing to admit that their technology may do more harm than good regardless of what they do (Facebook)? Do individuals who work for these companies or buy their products have a responsibility to keep them in check? Are people incentivized to buy products they know are beneficial to the world, or does their own bias prevent them from seeing this? How much does it take for people to realize the harm of technology and start taking counteractive measures?

Humans have unconscious bias toward their own ideas and views of the world, as well as toward people like them. But it is only when people are exposed to new ideas that they actually understand how their skills, connections, and influence really affect the world. Bias is hard to detect and much more difficult, perhaps impossible, to counteract, because people live in their own bubbles. It's so much easier to think that the way one lives one's life is good or has no effect on the world, but in reality, everything that every person does matters. Or does it? We could say that since people have such vast networks nowadays and technology has such a significant, difficult-to-understand impact on people (especially social media), their actions matter more than ever. But people are also increasingly polarized. Social media and the bubble of the tech industry mean that the people with the greatest influence are surrounded by others like them and are worlds away from the majority of people: namely those who are negatively affected by the “solutions” they are building (see the Uber driver example below). Unconscious bias among the ranks of technologists is a significant factor in why tech seems to create more problems than it solves.

The role of unconscious bias in the pursuit of technology for social good.

Bias in technology prevents social good in numerous ways. How do we stop this bias?

Iteration, impact, growth, listening, diversity

Show where bias is, attempted solutions - but also how they fall short of solving the problem

Ultimately, unconscious bias is part of human nature. It protects us from potentially detrimental ideas and brings order to our lives. But without exposure to new ideas, it is impossible to understand what could make our lives and the lives of those around us better. The recognition of bias is not really something that can be regulated by law. It is up to individuals to recognize where they can expose themselves to new ideas in their own lives and how they can use their influence (especially from a company standpoint) to help others do the same. Technology has enormous influence: its platforms are used by everyone, and the technology that runs them plays a role in bias as well. The technology we have today determines the technology and lives of people tomorrow, so we need to be responsible for what we build. Otherwise we will create a positive feedback loop that further separates us from each other and makes it that much harder to realize the many ways our understanding of the world and of people's problems is so incredibly wrong.

Technology is anything created by humans to solve problems using what we know about science and about our own lives.

Coming back from a day in San Francisco and too lazy to figure out the bus system, my friends and I opted for the cheapest, easiest form of transportation. We opened up the Uber app and waited a short five minutes for our ride to come. Looking at the driver's profile, we noticed that this man had driven somewhere in the neighborhood of 20,000 rides for Uber. We were astonished that this number was even possible. We asked him about it and learned that he had been driving full time for three years. It took some probing questions to really draw out his thoughts about Uber, but our curiosity and empathy for his situation got him to open up. He felt trapped: he had started working for Uber because of the promise that it would help him get ahead of the insane rent in San Francisco, but he ended up stuck with car payments, barely able to pay his rent and help out his family. He told us that he's hungry, that Uber takes a really large cut of his earnings (60%), and that it varies so much he never knows how much he is going to be paid. We were especially moved when I asked him what he thought was going on in the minds of Uber executives. He said: “They don't really see us as people, just because we're at the bottom of the ladder. They're blood-sucking vampires focused on themselves and money, and they don't even realize who they're hurting and how much.”

I volunteer for a student organization here at Berkeley called Blueprint. Our mission is to build technology for social good. We work with nonprofits that we believe understand the needs of certain, often underserved groups of people, and we build technological solutions that help them operate at scale and impact lives in a beneficial way. Tech for social good seems like a simple slogan: you just volunteer, write some code, and the world will be a better place, right? Not necessarily. We choose the organizations that we think will be most impacted by our help. Even so, our technology may not be as impactful as it could be, either because it doesn't serve the needs of the nonprofit or because the nonprofit's ideas about how to positively influence certain groups are incorrect. If that's the case, then we have not used our skills as developers and project leaders to their full advantage. In a way, we have failed people who could use our help. This is an idea I didn't fully understand the complexity of before joining, but it is so interesting to see how much thought goes into the best ways to make an impact. If we just focused on making the fastest, best-looking websites, no one would care. What matters is how people's lives are changed by what we build.

Business endeavors and corporate philanthropy

People are biased toward their own ideas and anything they contribute to, so if they're convinced the work they do is good, they won't pay attention to whether their company may be leading the world in a detrimental direction

These days, the people with money and influence are also the people that build technological solutions

There is a bigger gap between classes, so certain groups of people make decisions that affect everyone

Sometimes the needs of communities are really recognized

Either in the case of nonprofits being formed or tech companies being philanthropic

Sometimes they’re not

This podcast was all about the unintended consequences of technology, such as the dark side of YouTube promoting terrorism and AI making communism efficient.

Kids were seeing weird unboxing videos on YouTube and ended up being exposed to things they shouldn't have seen because of the keywords

YouTube wants to give users videos that are as sensational as possible, because sensational content encourages the strongest reactions and people will watch more (see the sketch after this list)

In this case, more views does not mean more people are learning from or benefitting from the service.

YouTube seems like one giant experiment on humans, yet we are not looking at the results and realizing how badly the experiment is going

Terrorists use the internet to recruit people to join their organizations

The people who build the platforms need to keep in mind that people with bad intentions are using their platforms as well.
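
Since the notes above describe an engagement-driven recommendation incentive, here is a minimal Python sketch of that mechanism. This is not YouTube's actual system; every name and number is a hypothetical illustration. It just shows how a ranker that optimizes purely for predicted watch time puts the most sensational item first, and how blending in even a rough, imperfect "benefit" score changes the ordering.

from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # what the platform can easily measure
    estimated_benefit: float        # 0-1 proxy for learning/benefit; much harder to measure

# Hypothetical catalog for illustration only.
catalog = [
    Video("Calm explainer", 4.0, 0.9),
    Video("Sensational conspiracy clip", 11.0, 0.1),
    Video("Keyword-stuffed kids video", 9.0, 0.0),
]

def rank_by_engagement(videos):
    # A pure watch-time objective rewards whatever keeps people watching.
    return sorted(videos, key=lambda v: v.predicted_watch_minutes, reverse=True)

def rank_with_benefit(videos, weight=0.5):
    # One possible mitigation: blend engagement with a rough benefit estimate.
    def score(v):
        return (1 - weight) * v.predicted_watch_minutes + weight * 10 * v.estimated_benefit
    return sorted(videos, key=score, reverse=True)

print([v.title for v in rank_by_engagement(catalog)])  # sensational clip ranks first
print([v.title for v in rank_with_benefit(catalog)])   # calm explainer rises to the top

The point is not the exact weighting (which is made up) but that watch time is the only signal the first ranker ever sees, which is exactly why more views does not have to mean more benefit.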

Ideas for Lenses

Controversy + conceptual phrase: where is there the most intellectual heat? Why is it not so simple?

The “move fast and break things” start-up culture and companies focusing on short-term gains

Products are built faster and people feel like they’re helping the world or making progress

There may be unintended consequences where the full impacts of products are not thought through and nothing is set up to regulate their effects

Evidence of this!

Do the people with the most skill and connections really know what is best for the majority of people? - probably not

Breaking developments + conceptual phrase: add to the conversation by bringing news / insight / primary sources / new trends

Tech money going into Prop C, the Gates Foundation, LinkedIn, Pinterest, Facebook

Horizontal lens + conceptual phrase: gather an array of sources, put them in conversation while focused on the concept, to ground us (and you) in context

How a bunch of different companies' products affect certain people, and what those companies have done and not done to make those effects more beneficial.

Historical lens + conceptual phrase: provide perspective with history focused through a concept

The concept that people feel a responsibility towards society and use their own wealth, no matter how they earned it, for societal progress.

In the past, things were a little simpler. People wanted cars and railroads, and the world was just being developed. It made sense to capitalize on those markets

The tech industry is a little different. It seems like we’re searching for every little thing that could be just a bit more convenient.

Philanthropists from the tech industry use their power and money to help people, as well as give people opportunities by hiring them at their companies.

There is a lot more weight on social justice now, and jobs are competitive

There has also been a rise in nonprofits and a focus on charity work

It didn’t use to be so easy to start a company. Now everyone can do it, but there’s also lots of opportunity for fraud and catastrophic failure.

Labor unions played a role in allowing diverse groups of people to be represented, but now they barely exist; are the opinions of these masses of people going unheard?

Vertical lens + conceptual phrase: how might an in-depth case study help us in a different way?

In depth: the Gates Foundation went from uselessly giving people computers to identifying needs well

Conceptual lens: what if you approached your topic with an outside thinker, like Goffman, or from a different industry/field/discipline, like film if you’re writing about music?

Psychological ideas about bias and refusal to listen to other people’s opinions.
