
WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about how I might use my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. However, in these blog posts you'll find my attempts to be a heckler (or techler haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P


PROJECTS


PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Today, the logical, information-processing side of the human mind is the part we understand best and the part we use to build helpful computers, but complexities at the subconscious level still prevent technology from becoming "human." Even so, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and it may even offer humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where robots and people both have consciousness? And if this happens, how can the two groups best function together to maximize prosperity?


PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research on corporate philanthropy, psychological ideas such as argumentative theory, and the ways advancements in technology could damage society. Project III is a much more developed version of this project.


PROJECT III

The Social Good Revolution: How Corporate Responsibility can Enable Technological Innovation and Beneficially Impact Society

Abstract: In this day and age, technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology incur short-term gains while corrupting entire populations, and unmoderated corners of the internet reduce participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose ethical and moral questions we have never seen before. Building technology with the benefit of society in mind may change from being the "right" thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are recognizing where their algorithms fall short of serving their customers' needs, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of new technology, they can build better products that make everyone better off and keep the company sustained in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than intention. Argumentative theory explains that individuals must interact and compare ideas in order to dismantle their confirmation bias. People increasingly care about working for companies that make ethical decisions; they can contribute by questioning corporate intentions, expressing their opinions, and taking confidence in the social impact of the products they build. Companies can encourage this kind of culture by aiming for diversity of thought in hiring and being open about their decision making. These efforts draw engineers to such companies and help the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices


Rough Middle 1

Comments: I realized while writing this that I still don't really have a good idea of where my structure is going. I thought I would be fine covering a lot and jumping around between topics, but I ended up pretty unsatisfied with how little everything connected in some places. Trying to go from psychology ideas to corporate philanthropy to machine learning and then to unconscious bias is just too much, and I can't wrap my brain around it. I think I'm going to have to sacrifice some of the research I've done, because if I fully developed every idea here this essay would never end. I've realized I'm most interested in what I discovered most recently at the AI for Social Good event: how algorithms work, the ways they can be used for social good and trained to be less biased, and the ultimate questions of whether this is the responsibility of tech companies and how we define social good as a society. The idea that companies have an economic incentive seems a little less important. A lot of the points about diversity of thought in the tech industry and giving people the opportunity to talk about and fix their issues are things I really want to cover, but the examples about hiring and diversity initiatives aren't really what I wanted to focus on. Technology actually fulfilling a need is an important concept, but I think I need to introduce that idea by first discussing the speaker panel, then bringing in the argumentative theory material, and then the LinkedIn and Slack examples.



Coming back from a day in San Francisco and too lazy to figure out the bus system, my friends and I opted for the cheapest, easiest form of transportation. We opened the Uber app and waited a short five minutes for our ride. Looking at the driver's profile, we noticed that this man had driven somewhere along the lines of 20,000 rides for Uber. We were astonished that this number was even possible. We asked him about it and learned that he had been driving full time for three years. It took some probing questions to really get his thoughts about Uber out of him, but our curiosity and empathy for his situation got him to open up. He felt trapped: he had started working for Uber because of the promise that it would help him get ahead of the insane rent in San Francisco, but he ended up stuck with car payments, barely able to make rent and help out his family. He told us that he's hungry, that Uber takes a really large cut of his earnings (around 60%), and that it varies so much he never knows how much he is going to be paid. We were especially struck when I asked him what he thought was going on in the minds of Uber executives. He said: they don't really see us as people, just because we're at the bottom of the ladder. They're blood-sucking vampires focused on themselves and money, and they don't even realize who they're hurting and how much.

Humans have unconscious bias toward their own ideas and views of the world, as well as toward people like them. It is only when people are exposed to new ideas that they understand how their skills, connections, and influence really affect the world. Bias is hard to detect and much more difficult, perhaps impossible, to counteract, because people live in their own bubbles. It's much easier to believe that the way one lives has no effect on the world, but since people have such vast networks nowadays and technology has such a significant, hard-to-trace impact, individual actions matter more than ever. People are also increasingly polarized. Between social media and the bubble of the tech industry, the people with the greatest influence are surrounded by others like them and are worlds away from the majority of people, namely those negatively affected by the "solutions" they are building (as our Uber driver confirmed). Unconscious bias among the ranks of technologists is a significant factor in why tech seems to create more problems than it solves.

Humans do not always perceive the world rationally, even when they believe they are doing so. People need to be exposed to differing viewpoints to counteract their own confirmation bias, their propensity toward the sunk-cost fallacy, and their egos. The paper "Why Do Humans Reason? Arguments for an Argumentative Theory" by Hugo Mercier and Dan Sperber offers an explanation of this detrimental behavior. The authors explain that "reasoning falls quite short of reliably delivering rational beliefs and rational decisions…[it] can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions" (Mercier). When people have an explanation for something in their head, they have difficulty accepting information that goes against it. Cognitive dissonance, which occurs when an individual holds two opposing views at once, is uncomfortable, so people often avoid the opposing sides of arguments. Everyone has their own view of what philanthropy looks like in the world, and tech companies, including startups and nonprofits, have their own ideas of how their missions and products fit into that view. How can technologists be aware of the drawbacks and unintended consequences of their technology if they only view it in a positive light?

The US has a long history of technology leading to the betterment of society despite short-term drawbacks. The industrial revolution brought advances in steel, machinery, and rail unlike anything the world had seen before. Technology in general leads to a better quality of life because it frees up time and reduces unnecessary human labor. During that era, the philanthropists atop these big corporations took on the responsibility of serving the needs of individuals that technology couldn't meet; philanthropy was a separate endeavor from business. Nowadays, however, philanthropy is much more intertwined with business itself.

Businesses have shifted focus from shareholders to the effects they have on society and how they are viewed by the people they serve. In a TED Talk titled "The Business Benefits of Doing Good," Wendy Woods discusses the tools businesses use to measure their impact and progress, and how those tools can be rethought. Companies are starting to focus on TSI (total societal impact) and CSR (corporate social responsibility) as opposed to TSR (total shareholder returns), because TSR alone does not determine a company's success in the long run. Citing a study, she notes that "the oil and gas companies that are performing most strongly on TSI see a 19 percent premium on their valuation...when they do really well on things like minimizing the impact of their company on the environment and water, and when they have very strong occupational health and safety programs" (Woods). Companies that prioritize short-term, quarterly-driven decisions over the long term lack foresight and are destined to fail. She says "thinking about business benefits of doing good makes people feel selfish," but "making money ethically, sustainably is something to be proud about." From a business standpoint, if people can do good for society profitably, then why wouldn't they? The difficult part is figuring out exactly where a business's view of what society wants differs from what society actually wants and will pay for. This goes back to argumentative theory: the bias technology-based businesses have toward their own ideas can prevent them from building technology that actually serves needs and from correcting the faults in their products that harm society.

Being empathetic toward customers through corporate philanthropy programs is one way companies make a name for themselves. For example, an ABC News article titled "Ride Share Companies Embrace Election Frenzy" by Cathy Bussewitz details how Lyft and Uber have made an effort to do their part in getting people out to vote. Lyft is "working with Voto Latino … as well as nonprofit organizations that help blind people and student veterans to distribute discount codes and identify where free rides are needed," and Uber "is offering $10 off rides to the polls across the country and added a feature in its app that helps customers find their polling stations by typing in a home address" (Bussewitz). These companies identified that lack of transportation was one reason people weren't voting and used their platforms and influence to get more people to the polls. The underlying motive for both companies may be to use social work to differentiate themselves from the competition. These days, because unethical corporate practices are exposed the moment their stories spread through social media, acting in the best interest of society is increasingly important.

Additionally, internal hiring practices and company culture can bring about social good, both because people want to work for companies that hire ethically and because companies want to hire employees who will really help them accomplish their goals. For Slack, one interpretation of social good is hiring people from diverse backgrounds, so that the hiring process is fair and the company understands its products better thanks to a diversity of voices. An article in The Atlantic titled "How Slack Got Ahead in Diversity" by Jessica Nordell focuses on Slack's intense effort to genuinely build diversity in its workplace and to make those individuals feel accepted. "At Slack, the absence of a single diversity leader seems to signal that diversity and inclusion aren't standalone missions, to be shunted off to a designated specialist, but are rather intertwined with the company's overall strategy" (Nordell). Social good should be a company-wide mission, not an afterthought; by involving employees in diversity conversations, everyone takes responsibility for how the company actually achieves that mission.

In hiring, the company attends to "interpersonal phenomena like stereotype threat, in which people from stigmatized groups spend mental energy grappling with negative stereotypes about those groups," as well as to interviewers "inadvertently favor[ing] candidates who resemble themselves, and if criteria for a job are ambiguous, interviewers may mentally rejigger those criteria to fit whatever a favored candidate has" (Nordell). Slack recognizes that some people have privilege in certain situations and others don't, and when potential hires see that the company is empathetic toward the issues they face, it is easier for them to feel included. Having a diverse workforce is also a competitive advantage that drives productivity and profits as companies sell their products and services to a broad population: the Institute for Public Relations published a study finding that nearly half of American millennials say a diverse and inclusive workplace is an important factor in a job search. People clearly care about diversity, but diversity should not just be a numbers game. It should be a genuine, thoughtful effort to bring new ideas into the workplace in order to understand the issues groups face and solve problems better.

How do nonprofits and philanthropists ensure they are actually fulfilling a need? The tech industry is huge, and people are turning their ideas into companies and shipping products more rapidly than ever before. According to an article by Erin Griffith in Fortune titled "The Ugly Unethical Underside of Silicon Valley," $73 billion in venture capital was invested in U.S. startups in 2016, and according to CNN, "Apple, Alphabet, Microsoft, Amazon and Facebook are collectively worth nearly $3.3 trillion." Silicon Valley startups have become so common, and such a quick path to success, that people often build businesses without putting significant thought into how those businesses will function in society. This rule-breaking culture is the norm in Silicon Valley, and sometimes the decisions startups make are outright unethical. For example, "Skully, the failed maker of smart motorcycle helmets, [is] being sued for fraudulent bookkeeping"; "Faraday Future and Hyperloop One, ambitious, well-funded companies [are] now tainted by lawsuits and accusations of, respectively, overhype and of mismanagement"; and "there's less transparency as companies stay private longer (174 private companies are each worth $1 billion or more), and there's an endless supply of legal gray areas to exploit as technology invades every sector" (Griffith). Essentially, these companies fake it to gain a competitive edge and hurt society in the process. The article is a telling description of Silicon Valley's culture and why some of these practices are so harmful. Startups seem productive and beneficial to society, with quickly implemented business plans and products that ship as fast as possible, but these companies have to cut corners to generate profit because they aren't necessarily beneficial to people in the long term, and people don't want to support something that affects them negatively.

An episode of the TED Radio Hour podcast titled "Unintended Consequences," hosted by Guy Raz, discusses the dark ways people can use technology that we often do not think about, from the side of YouTube that promotes terrorism to AI that could make communism efficient.

Guest speaker James Bridle, who gave a TED talk titled "What Do Kids' Videos on YouTube Reveal About the Internet's Dark Side?", explains that people are making long, sensational videos of easter-egg openings and pirating content from kids' TV shows in order to target children for views and ad revenue. Even more unsettling, some videos whose titles seem kid-friendly are actually dark and disturbing and can scar kids given the freedom to browse the site. On the surface, YouTube is simply a site for sharing videos and finding them based on one's interests and search terms. In practice, this technology gives rise to ethical issues when people with harmful intentions get access to it. Bridle comments that YouTube has "decided to optimize for reactions and sensation over other forms of kind of verifying knowledge." As a company, YouTube's main focus is getting people on the site so it can earn ad money. But considering how detrimental some of these videos are, society may want to rethink the value of entertainment: people are looking to be entertained, but are they sacrificing truth in the process? And should YouTube really bear responsibility for that question? The tech we build is an experiment whose results we can use to change the tech and to reflect on the faults within society.

Another guest speaker, Yasmin Green, who gave a TED talk titled "How Did The Internet Become A Platform For Hate Groups?", discusses her work counteracting terrorism and giving people access to information that can help them make informed decisions. Technologists once thought the internet was going to be some sort of perfect utopia. They envisioned that it would "connect people to information [and] to each other," that it was "going to transform democracies" and "empower populations" (Green). But the intentions of technologists usually stand to be corrected. The terrorist group ISIS took off because it had enormous influence on the internet and radicalized people whose questions about what being a terrorist was like were answered only by ISIS's biased views. There is no easy fix: we can't just push everything we don't like onto the so-called dark web. Green says "it's easy and dangerous to say, well, there are good people and bad people because...the prescription is really punitive technologies or policies." Censoring people who disagree with the direction society or internet companies think the world should go isn't as simple as it sounds, and giving individuals access to unbiased, helpful information is a difficult problem. It is important to think about how hate groups and people with bad intentions might use a technology; anticipating and correcting for their actions, and constantly rethinking the technology, is the only thing society can really do. Spotting problems early and working for the good of society as a whole is incredibly important.

Artificial intelligence is an application of technology that holds great promise while posing difficult-to-understand problems. In her TED Talk, Zeynep Tufekci explains that it is difficult to tell how AI makes decisions, which is especially alarming when those decisions greatly affect people's lives. She says "we cannot outsource our moral responsibilities to machines"; "we need to cultivate algorithm suspicion, scrutiny and investigation" and ensure "accountability, auditing and meaningful transparency" of the algorithm (Tufekci). AI can be thought of as a "black box": we can see what goes in and what comes out, but we don't know much about what happens in between. Despite this, humans should be responsible for their creations. It is important to accept that AI won't solve all our problems, and also that the problem isn't necessarily the AI; it could be society itself. For example, search engines and websites that rely on user preferences can create echo chambers of opinion and bias results against certain groups. An article by Arielle Pardes in Wired titled "Pinterest Wants to Diversify Your Search Results" details how Pinterest is implementing a simple feature that lets people select their skin tone in order to get more relevant search results. She explains that "it's the very beginning of a longer journey toward bringing greater diversity to Pinterest's platform, through showing different complexions, body shapes, disabilities, and ages. Those are complex problems," but "the whole point of adding visual filters was to remove the barriers to content discovery" (Pardes). The tool obviously isn't perfect, since it relies on machine learning, but it is an example of a controlled, small-scale approach to solving a diversity problem. Brute-force computing and more data are not the answer; this issue will be solved by people who really understand their product and audience and who are trying to provide the best experience for those people. Pinterest recognizes the limitations of its technology and is careful in its search for a solution so it can benefit its users as much as possible.
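Tufekci's call to "cultivate algorithm suspicion, scrutiny and investigation" can feel abstract, so here is a minimal sketch of what auditing a black box from the outside might look like. Everything in it is hypothetical: score_applicant stands in for any opaque model we can query but not inspect, and the feature names, zip codes, and groups are made up for illustration.

```python
# Toy sketch of auditing a black-box model from the outside: feed it
# comparable inputs and look for outcome gaps between groups.

def score_applicant(applicant: dict) -> float:
    # Stand-in for an opaque model we can only query, not inspect.
    # This toy rule leans on zip code, a common proxy for race and income.
    base = 0.5 + 0.05 * applicant["years_experience"]
    if applicant["zip_code"].startswith("941"):  # arbitrary "favored" area
        base += 0.2
    return min(base, 1.0)

def audit_by_group(applicants, group_key):
    """Compare average scores across groups without opening the box."""
    by_group = {}
    for a in applicants:
        by_group.setdefault(a[group_key], []).append(score_applicant(a))
    return {g: sum(scores) / len(scores) for g, scores in by_group.items()}

applicants = [
    {"group": "A", "zip_code": "94110", "years_experience": 3},
    {"group": "A", "zip_code": "94107", "years_experience": 2},
    {"group": "B", "zip_code": "60620", "years_experience": 3},
    {"group": "B", "zip_code": "60628", "years_experience": 2},
]

print(audit_by_group(applicants, "group"))
# Equal experience, different zip codes -> unequal average scores,
# even though the model never sees a "race" feature.
```

This is exactly the kind of scrutiny Tufekci argues for: the auditor never needs to know how the model works internally, only to probe it and measure disparities in what comes out.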

In an interview for Wired titled "How Technology Accentuates Tribalism," LinkedIn CEO Jeff Weiner describes how his company is trying to understand the effects it has on minorities given the pre-existing networks of most of its patrons; it feels like every week there's another headline about how some of this stuff is going in the wrong direction. LinkedIn's mission is to let people connect with others and find jobs, but to fulfill it the company needs to understand the unexpected impact of its technology. Problems occur when the platform provides "more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks" (Weiner). The world can be very unfair when people who already have opportunity are given more and more regardless of their actual merit; it doesn't seem right that certain people have to work so much harder than others to get to the same place, and technology makes it even harder in some cases. LinkedIn implemented a button that lets people ask for referrals from others in their network, but the company realized that many very capable people without networks couldn't use the button to their advantage. So LinkedIn set up the Career Advice Hub, where people can ask questions and find mentors, bridging the gap between people who have resources and people who don't. LinkedIn is trying to recognize the needs of the underdog and also to help companies find more diverse candidates to hire. Connecting the world, spreading ideas, and giving minorities opportunities to have a voice has shifted into the hands of technology companies, so it is vital that they take responsibility for their role.


Stuff I still need to develop:

During an event titled "AI for Social Good" hosted by my club, Blueprint, I facilitated a speaker panel with accomplished individuals from multiple fields who are focused on using AI for the betterment of society. My rough notes follow.

Christine Robson, a product manager for Google's machine learning teams:

- Healthcare: detecting cancer by looking at lots of images, and recommending readmittance to the ER based on many factors (not just a black-box diagnosis) that doctors can look into

- Predicting earthquake aftershocks

- Tracking whales: underwater audio is turned into an image so it can be classified (a rough sketch of the idea follows)
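The details of Google's whale pipeline weren't covered beyond "audio turned into an image," so this is only a minimal sketch of that general idea, assuming a SciPy spectrogram as the audio-to-image step; the synthetic "whale call" below is a placeholder, not real data.

```python
# Minimal sketch: turn a waveform into a spectrogram, i.e. a 2-D array
# that an ordinary image classifier can consume.
import numpy as np
from scipy.signal import spectrogram

sample_rate = 2000                       # Hz; whale calls are low-frequency
t = np.linspace(0, 5, 5 * sample_rate)   # five seconds of "audio"
# Fake "call": a tone sweeping upward, buried in noise.
audio = np.sin(2 * np.pi * (100 + 20 * t) * t) + 0.5 * np.random.randn(t.size)

freqs, times, power = spectrogram(audio, fs=sample_rate, nperseg=256)

# `power` is now frequency x time -- effectively a grayscale image --
# so "find the whale" reduces to ordinary image classification.
print(power.shape)  # rows are frequency bins, columns are time slices
```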

Josh Kroll, a postdoc specializing in governance and computer systems at the UC Berkeley School of Information:

- Cognitive bias is built into our brains and is difficult to observe

- Even if algorithms don't explicitly know your race, they can get a pretty good idea from zip codes and other factors, so just not using race isn't a silver bullet

- ML presents a problem when its results are allocative or representational, i.e., when people are actually affected by them

- In embedding systems, we represent words as vectors, and this tends to encode bias: he showed a graph of certain words clustering as related to "male" and "female" (a toy sketch of the idea follows)
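To make the embedding point concrete, here is a toy illustration. Real systems learn word vectors from huge text corpora; the tiny hand-made vectors below just mimic the geometry Kroll described, where occupation words drift toward a "male" or "female" direction.

```python
# Toy illustration of bias in word embeddings: project occupation words
# onto a "gender axis" and see which way they lean.
import numpy as np

vectors = {
    "man":      np.array([ 1.0, 0.1, 0.3]),
    "woman":    np.array([-1.0, 0.1, 0.3]),
    "engineer": np.array([ 0.7, 0.9, 0.2]),   # hand-made, stands in for learned vectors
    "nurse":    np.array([-0.7, 0.9, 0.2]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

gender_axis = vectors["man"] - vectors["woman"]  # points "male-ward"

for word in ("engineer", "nurse"):
    # Positive projection = leans "male"; negative = leans "female".
    print(word, round(cosine(vectors[word], gender_axis), 2))
# engineer  0.6  (leans male)
# nurse    -0.6  (leans female)
```

Nothing in the vectors says "engineers are men"; the lean is just a byproduct of the geometry, which is exactly why this kind of bias is easy to ship without noticing.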

- Dark-skinned faces are poorly identified: HP deployed face detection in 2009 and people realized it failed on darker skin, and Google Photos, which suggests tags for photos, suggested "gorillas" for two dark-skinned individuals, a mistake that carries heavy cultural baggage

- The root cause is a gap between the data used to design a system and the data we apply it to

Ilya Kirnos, CTO and founding partner of SignalFire, a venture capital firm with a focus on technology and AI investments:

- As engineers, they have a better understanding of which technologies are fads and which will actually take off

- They have large endowments and invest on ten-year horizons, so they look more at the long term

- There are different types of AI: low-stakes applications (Google ads and search, which he used to work on) are underhyped, while high-stakes applications (cars, medicine) are overhyped, could have bad consequences if they mess up, and need to be proven at lower stakes first

- They invested in self-driving forklifts: there are about 10,000 forklift injuries per year, so in the short run jobs are lost and the industry is really affected, but in the long run there are fewer injuries and lives lost

- They actually call up warehouses and ask whether there's really a need for the tech, to ensure impact

- There is a sweet spot for autonomy that the world will actually accept

- Their goal is to back transformational companies, not necessarily to pursue social good, but they do consider externalities: they would feel bad hurting the world, and their investors wouldn't like it

- Low- to medium-stakes AI is here to stay

Q&A panel:

How can people get involved?

- There are lots of prediction tools, human-context ethics courses, and machine learning courses

What bias exists, why does it keep happening, and what do we do to make sure it doesn't?

- There is no good solution. "Painful to see that come out... they knew there was a risk for bias and they worked really hard to debias the model… they worry about releasing new features that they are worried may have errored edge cases"

- Look at edge cases! "Even very small error percentages are going to show up some large number of times" (a system that is 99.9% accurate still makes a million mistakes across a billion decisions)

- "Thinking really hard about what they want to be doing and how we want to get there"

- Diverse voices methodology: community governance of investment decisions, where people weigh in on the social impacts of technology. Lots of interventions like this have been proposed, but which ones work?

What kind of regulations should be put on AI, if any?

- "Seems tough to craft a regulation that surgically removes the cases you don't want." These problems aren't specific to AI; they arise for any advanced technology decisions

- Google pulls back from regulation: policy makers don't understand tech, and there are too many hairy things to deal with

- "Unwind to how technology interacts with society and the ways in which our lives are impacted by technology." "That's when I think it's important for society and government as a proxy for society to take a look at these things and see how they interact with our lives and make decisions that might be difficult"

- Privacy rules depend on what you're doing with the data and are enforced by different parts of the government; we need to think about goals

How do you determine whether something is bias or an underlying feature?

- "Is the problem in your analysis or your data collection or is it an unfairness in the world? The world is not a terribly fair place and so maybe we shouldn't hold technologies to some very high standards of fairness." But we can think really hard about what we want to do and what that standard should be; you often end up trading off between...

- Example: California has a mandate to assess people for their risk of reoffense and determine whether they can have money bail. ProPublica studied COMPAS, a tool that gives defendants a score from 1 to 10. "People of colour are arrested more often so they naturally have a higher risk of being rearrested." ProPublica says the scores should equalize false-positive rates across races, but then the numbers would mean something different for different races (a small sketch of this metric follows)
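To pin down what "equalize false-positive rates" actually measures, here is a small sketch. The records, scores, and the high-risk threshold are all made up for illustration; real COMPAS data and thresholds differ.

```python
# Sketch of the fairness metric in the ProPublica/COMPAS debate: the
# false-positive rate, i.e. people flagged "high risk" who did NOT reoffend.

def false_positive_rate(records):
    """Share of non-reoffenders who were nonetheless flagged high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["score"] >= 7]  # threshold: 7/10
    return len(flagged) / len(non_reoffenders)

records = [
    {"group": "A", "score": 8, "reoffended": False},
    {"group": "A", "score": 3, "reoffended": False},
    {"group": "A", "score": 9, "reoffended": True},
    {"group": "B", "score": 7, "reoffended": False},
    {"group": "B", "score": 8, "reoffended": False},
    {"group": "B", "score": 2, "reoffended": False},
    {"group": "B", "score": 9, "reoffended": True},
]

for group in ("A", "B"):
    subset = [r for r in records if r["group"] == group]
    print(group, round(false_positive_rate(subset), 2))
# A 0.5, B 0.67: group B's non-reoffenders are flagged more often -- the kind
# of disparity ProPublica measured. Equalizing these rates, though, changes
# what a given score means across groups, which is the panel's point.
```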

What definition of social good are companies going for, and are they defining it correctly?

- The presentations lacked a specific normative vision for how AI should fit into the world: the first two gave specific contexts where AI is self-evidently good, the third was "don't do evil," and the fourth reduced ethically good things to an economic model once externalities are factored in

- Companies have their own ideas and standards for AI being good. Christine: companies are trying, but "I hope society isn't hoping that Google is going to make the definition of what makes society good. That's not the place of any single corporation. That's the place of society"

- Google spends a lot of time introspectively deciding what its goals are (its AI principles say uses of the tech should be socially beneficial), has ethicists to determine what is socially good, and also looks at what is bad, the things it doesn't want to do

- "Who's going to decide what the goal of AI in society is? I don't know, maybe you should?" You define the terms of the benefit and cost functions. "I think you're asking a political question and not a technology question." Not all political systems will resolve the tension between some people wanting tech to benefit them one way and other people wanting it to benefit them another; people want different things from the COMPAS algorithm depending on whether they are the convicted or the judge

- Reuben Binns, "Fairness in Machine Learning: Lessons from Political Philosophy": focus on the system, not the technical artifacts. Maybe the problem is that we are arresting too many Black people; maybe the money-bail system itself is unethical

- Cool! Tech isn't going to solve the underlying issues in society, but it's not necessarily the problem either. Focus on the problems

