
WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about my opportunities for using my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And, how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. However, in these blog posts you'll find my attempts to be a heckler (or techler haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P


PROJECTS


PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Today, the logical, information-processing side of the human mind is the part we understand well enough to build helpful computers, but complexities at the subconscious level still keep technology from becoming “human.” Even so, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and may even offer humans a chance to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where both robots and people have consciousness? And if that happens, how can these two groups best function together to maximize prosperity?


PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research on corporate philanthropy, psychological ideas such as the argumentative theory of reasoning, and why advancements in technology have great potential to damage society. Project III is a much more developed version of this work.


PROJECT III

The Social Good Revolution: How Corporate Responsibility can Enable Technological Innovation and Beneficially Impact Society

Abstract: Technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology incur short-term gains while corrupting entire populations, and unmoderated corners of the internet erode participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose ethical and moral questions we have never faced before. Building technology with the benefit of society in mind may shift from being the “right” thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are realizing where their algorithms fall short of serving the needs of their customers, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of their new technology, they can build better products that make everyone better off and sustain the company in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than good intentions. Argumentative theory holds that individuals must interact and compare ideas in order to dismantle their confirmation bias. People are starting to care more about working for companies that make ethical decisions. They can contribute by questioning corporate intentions, expressing their opinions, and taking confidence in the social impact of the products they build.
Companies can also encourage this kind of culture among their ranks by aiming for diversity of thought in hiring and being open about their decision making. These efforts draw engineers to the company and make the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices

Techler

Arc 4

Outline of Questions - historical lens? Robber barons + INTERNET unintended consequences

How does unconscious bias lead to detrimental consequences?

In what ways do the methods companies use to measure success fail to factor in external costs?

Bias controls our perception due to the sunk-cost fallacy, confirmation bias, and ego

Argumentative theory - we need other people

What is the incentive for companies to pursue philanthropic projects and total societal impact?

(move?) philanthropy of robber barons (to fill less profitable needs) during second industrial revolution combined with improvement of tech in general leading to better quality of life for people

The start of the internet was all about connecting people and giving them information (what about unintended consequences and how this changed)

Airbnb, Lyft - companies differentiate themselves for better branding

Gates foundation, nonprofits - have extra money and power and want to do good with it

Historical lens - the rise of philanthropic individuals during the second industrial revolution and how this still holds true but has also shifted to companies

Is having a good image/doing good really profitable?

Total shareholder return + shareholder theory -> Total societal impact, stakeholder theory

Shift in responsibility - consumers are paying attention

How do non-profits and philanthropists ensure they are actually fulfilling a need?

So many more startups - people really want to have an impact

Sometimes they don’t serve a need or must be fraudulent in order to succeed.


The internet and social media seem like the future of everything!

Promise of connecting the world

Historical lens - What people thought the internet was going to be like

The rise of algorithms

We have no idea what’s going on inside the black box

We’re not using the right criteria to make decisions

Biased decisions in the past

Sensationalism and profits

Unintended consequences

YouTube

Pinterest

LinkedIn hiring only people like themselves - leads into next segment with their tools for counteracting this, and Slack

Why do the people with the most power and influence in the tech industry not understand the needs of all groups?

Understanding needs and philanthropy is a complicated issue

People with power and networks gain more power and networks

LinkedIn and Slack - trying to do some work to counteract this

How can people expand their network and have more diversity of thought in their lives?

It’s not just companies, individuals should be responsible for their impact

Go back to argumentative theory

Bringing in people that are from diverse backgrounds and understand certain issues deeply

The importance of making those people feel included (below)

How can companies increase diversity in their organizations and make those people feel included in the right conversations?

Slack ex

Social media - using it to connect people from different backgrounds rather than

Loop back to companies developing technology that hinders diversity and free thought (?? how do I do this)

What is the big idea here in terms of society?

AI + social good speaker panel

It isn’t up to companies or any one person to decide what is socially good and what direction we want to go as a society

No one should let algorithms define their lives if they don’t like it

This is ultimately a political issue and we need to keep talking through it

Speakers said they were never “solving” issues, but that doesn’t mean it’s impossible to make genuine progress with the right intentions and ethical basis.

Key connections: How does diversity of thought lead to a better pursuit of philanthropic progress?

How does diversity in the workplace lead to tech (platforms?) reaching diverse groups of people?

Notes from AI for Social Good

During an event titled “AI for Social Good” hosted by my club, Blueprint, I facilitated a speaker panel of accomplished individuals from multiple fields who focus on using AI for the betterment of society.

Christine Robson - a product manager for Google's machine learning teams

Healthcare

detecting cancer by looking at lots of images

Recommends ER readmission based on many factors (not just a black-box diagnosis) that doctors can look into

Earthquake aftershocks

Whales

Underwater audio turned into image to be classified

Goal is to track the whales
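The “underwater audio turned into image” step is typically a spectrogram. A minimal NumPy sketch of the idea, using made-up window, hop, and signal values for illustration (not Google's actual pipeline):

```python
import numpy as np

def spectrogram(signal, window=256, hop=128):
    """Turn a 1-D audio signal into a 2-D time-frequency image.

    Each column is the FFT magnitude of one windowed frame, so the
    result can be fed to an ordinary image classifier.
    """
    frames = [signal[i:i + window] * np.hanning(window)
              for i in range(0, len(signal) - window + 1, hop)]
    # Magnitude of the positive-frequency half of each frame's FFT
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

# A stand-in "whale call": a 200 Hz tone sampled at 4 kHz
t = np.arange(4000) / 4000.0
call = np.sin(2 * np.pi * 200 * t)
image = spectrogram(call)
print(image.shape)  # (frequency bins, time frames)
```

The tone shows up as a bright horizontal band in the image, which is exactly the kind of pattern an image classifier can learn to spot.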

Josh Kroll - postdoc specializing in governance and computer systems at the U.C. Berkeley School of Information

Cognitive bias is built into our brains and is difficult to observe

Even if algorithms don’t explicitly know your race, they can get a pretty good idea from zip codes and other factors

Simply not using race isn't a silver bullet

ML presents problems when its harms are allocative (who gets what) or representational (how groups are portrayed)

In short, people are affected by these results
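The zip-code point can be made concrete with a toy example. This sketch uses entirely synthetic numbers (nothing from the panel): the protected attribute is dropped from the data, yet it can still be recovered from zip code alone because of residential segregation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic city: 90% of group 0 lives in zips 0-4, 90% of group 1
# in zips 5-9; the remaining 10% live anywhere.
n = 10_000
group = rng.integers(0, 2, size=n)
zip_code = np.where(
    rng.random(n) < 0.9,
    rng.integers(0, 5, size=n) + 5 * group,  # "home" zip range
    rng.integers(0, 10, size=n),             # 10% live anywhere
)

# A "model" that never sees `group`: guess it from zip code alone
# by taking the majority group within each zip.
majority = np.array([
    np.bincount(group[zip_code == z], minlength=2).argmax()
    for z in range(10)
])
prediction = majority[zip_code]
accuracy = (prediction == group).mean()
print(f"recovered protected attribute with {accuracy:.0%} accuracy")
```

Even this crudest possible model recovers the attribute for the vast majority of people, which is why dropping the sensitive column doesn't make a model blind to it.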

Ilya Kirnos - CTO and founding partner of SignalFire, a venture capital firm with a focus on technology and AI investments

As engineers, they have a better understanding of which technologies are fads and which will actually take off.

They have large endowments and invest on 10-year horizons

Looking more at long term

There are different types of AI

Low stakes - underhyped

Google ads and search (which he used to work on)

High stakes - overhyped

Cars, medicine

Could have bad consequences if they mess up

Need to be proven on more low stakes scales

Invested in self driving forklifts

10,000 forklift injuries per year

Short run - jobs are lost, industry really affected

Long run - fewer injuries and lives lost!

They actually call up warehouses and ask if there’s really a need for the tech

Ensure impact

There is a sweet spot for autonomy that is actually going to be accepted into the world

Their goal is to back transformational companies

Not necessarily to pursue social good, but they do consider externalities

They would feel bad hurting the world and their investors wouldn’t like it

Q&A panel:

What kind of regulations should be put on AI, if any?

What is the definition of social good companies are going for and are they defining it correctly?

Notes on the rest of the podcast

This podcast was all about the unintended consequences of technology, such as the dark side of YouTube, the internet's role in promoting terrorism, and AI making communism efficient.

Kids were watching weird unboxing videos on YouTube and ended up exposed to things they shouldn't have seen because of keyword matching

YouTube wants to serve videos that are as sensational as possible because sensationalism provokes the strongest reactions and keeps people watching

In this case, more views does not mean more people are learning from or benefitting from the service.

YouTube seems like a giant experiment on humans, yet we are not looking at the results and realizing how badly the experiment is going
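The optimization point is mechanical: if a ranker sorts by predicted engagement alone, sensational items win regardless of quality. A toy sketch with invented titles and numbers (only the ordering logic matters, this is not any real platform's algorithm):

```python
# Each candidate video: (title, predicted watch minutes, educational value).
videos = [
    ("calm science explainer",     4.0, 0.9),
    ("outrage-bait conspiracy",    9.5, 0.1),
    ("shocking unboxing marathon", 8.0, 0.2),
    ("thoughtful news analysis",   5.0, 0.7),
]

# An engagement-only objective, like the one the podcast criticizes:
by_engagement = sorted(videos, key=lambda v: v[1], reverse=True)

# The same candidates under an objective that also weighs value:
by_blend = sorted(videos,
                  key=lambda v: 0.5 * v[1] / 10 + 0.5 * v[2],
                  reverse=True)

print("engagement-only top pick:", by_engagement[0][0])
print("blended-objective top pick:", by_blend[0][0])
```

Nothing about the candidates changes between the two rankings; only the objective does, which is the podcast's point that “we've decided to optimize for reactions and sensation” is a choice, not an inevitability.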

Terrorists use the internet to recruit people to join their organizations

The people who build the platforms need to keep in mind that people with bad intentions are using their platforms as well.

Edward Tenner: Can We View Technology's Unintended Consequences In A Positive Light?

“the positive outcomes that we expect are usually not nearly as positive as we imagine them. But also, the negative things don't turn out in the same way. For example, we tend to think that what is going on is just going to go on and on and get worse and worse, or it's going to go on and on and get better and better. And reality usually has surprises for us.”

“I don't think it's really terribly helpful, unless you're actually working on something concretely to deal with a problem, to worry too much about the problem if there isn't something that you can do about it.”

Opportunity arises when things go wrong (we are able to learn)

Good things do not always turn out as good as expected

Why worry about things we can't change when the results will be different than expected anyway?

Everything happens for a reason

Titanic examples, Black Plague example

James Bridle: What Do Kids' Videos on YouTube Reveal About the Internet's Dark Side?

The dark side of YouTube, where children are exposed to disturbing content and people make extremely long videos and pirate content in order to collect ad revenue

“we've decided to optimize for reactions and sensation over other forms of kind of verifying knowledge”

There are other things we could decide to optimize for

The tech we build is an experiment whose results we can use to change course

Is what gets us watching really the best thing we should be shown?

Yasmin Green: How Did The Internet Become A Platform For Hate Groups?

Thought the internet was going to be perfect

“We were just like, we're connecting people to information, to each other. This is going to transform democracies, and it's going to empower populations.”

ISIS took off because it had a lot of influence on the internet, radicalizing people who had questions no one else was answering

“And it's easy and dangerous to say, well, there are good people and bad people because it ends up - the prescription is really punitive technologies or policies, which is, let's suspend people or let's censor people or let's punish people.”

There is no easy fix to the problem - we can’t just push everything we don’t like to the dark web

The key is access to information

“Like, it's not enough just to focus on your platform and the, you know, the micro-instances that you see. Like, you have to think about terrorist groups and their goal and their strategies and what they're doing across the whole Internet. And you have to have a big-picture view. We can't be so tunnel-visioned anymore. The more that we do that, the better we'll be at spotting problems early.”

It’s really important in general to think about the potential use of technology by hate groups and people with bad intentions

Anticipating and correcting for the actions of these people and constantly rethinking our technology is the only thing we can do

