
WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. As I think about how I might use my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. However, in these blog posts you'll find my attempts to be a heckler (or techler haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P


PROJECTS


PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Today, the logical, information-processing side of the human mind is the part we understand best and the part we draw on to build helpful computers, but complexities at the subconscious level still prevent technology from becoming “human.” Even so, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and it even offers humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where robots and people both have consciousness? And if this happens, how can these two groups best function together to maximize prosperity?


PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research on corporate philanthropy, on psychological ideas such as argumentative theory, and on why advancements in technology have great potential to damage society. Project III is a much more developed version of this project.


PROJECT III

The Social Good Revolution: How Corporate Responsibility can Enable Technological Innovation and Beneficially Impact Society

Abstract: Technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology bring short-term gains while corrupting entire populations, and unmoderated corners of the internet reduce participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose new and concerning ethical and moral questions. The decision to build technology with the benefit of society in mind may change from being the “right” thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are realizing where their algorithms fall short of serving their customers’ needs, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of their new technology, they can build better products that make everyone better off and keep the company sustainable in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than intention. Argumentative theory explains that individuals must interact and compare ideas in order to dismantle their confirmation bias. People are starting to care more about working for companies that make ethical decisions; they can contribute by questioning corporate intentions, expressing their opinions, and feeling confident in the social impact of the products they build. Companies can encourage this kind of culture among their ranks by aiming for diversity of thought in hiring and being open about their decision making. These efforts incentivize engineers to work for such companies and make the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices


Annotated Bibliography


In her article titled “Truth, Lies, and the Internet” in The Atlantic, author Rebecca Rosen discusses how human psychology tends to lead groups toward polarization and misinformation. Psychologists argue that “humans are notoriously poor at reasoning as it is conventionally understood, predictably falling into known traps, such as the confirmation bias (the tendency to absorb information that supports what one already thinks).” Human reason “exists to structure and promote discourse” and is “better at spotting the flaws in someone else's argument than its own,” which means “groups or pairs can do much better on a variety of tests than when flying solo” (Rosen). Before going on the internet to find facts about current events and ideas, people often have preconceived notions about the kinds of facts they want to find. If someone finds a “fact” on the internet that agrees with their side of the story, it is very difficult for them to question it. Going back to the ideas from the previous discussion, when humans are vessels for memes, they spread and sustain those memes even to their own detriment. A meme is like a virus, since people do not want to be cast out of groups of like-minded people for questioning ideas. We look to fact-checking sites to refute, not to confirm: we choose our facts and personal narratives based on what we already believe, unless we do some major work to disprove ourselves or talk to someone who will. Additionally, “We continue to believe that the truth will out and the facts will save us. We will have better information and make better decisions, elect better leaders, have better government, be better off,” but “the bad news is that accuracy only takes us so far” (Rosen). Clearly, just having fact-checkers available isn’t enough to fully educate people on the diversity of opinions in the world. And since the internet is the platform on which most people consume information, reform should start there. However, it is difficult to understand exactly what the problem is and how to fix it. Finally, a “piece can be just as wrongheaded once the numbers are correct. View-from-nowhere journalism or he-said-she-said reporting can be entirely accurate, but do little to help explain an issue or an idea, to say nothing of inspiring empathy or compassion” (Rosen). It’s not just about facts: ideas need to be explained logically and coherently in order for people to make their own inferences and change their beliefs.

This article was helpful because it introduced me to the discussion of confirmation bias and to how what is factual and accurate is not necessarily what people will perceive when hearing information from certain sources. This comes into play in the business world because people often don’t want to prove themselves wrong, and as a result they end up building technology that is not impactful.

There is an issue of search engines and websites that rely on user preferences creating echo chambers of opinion and biasing results toward certain groups. An article by Arielle Pardes titled “Pinterest Wants to Diversify Your Search Results” in Wired details how Pinterest is implementing a simple feature that lets people select their skin tone in order to get more helpful search results. She explains that “it’s the very beginning of a longer journey toward bringing greater diversity to Pinterest’s platform, through showing different complexions, body shapes, disabilities, and ages. Those are complex problems,” but “the whole point of adding visual filters was to remove the barriers to content discovery” (Pardes). This tool obviously isn’t perfect, since it relies on machine learning, but it is an example of a very controlled, small-scale approach to solving a diversity problem. The problem isn’t going to be solved with brute-force computing; it’s going to be solved by people who really understand their product and audience and are trying to provide the best experience to those people.
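
To make the mechanism concrete, here is a minimal, hypothetical sketch of the idea behind such a filter: the user self-selects a skin-tone range, and search narrows to pins whose ML-estimated tone matches. The Pin fields, tone tags, and data here are all invented for illustration; this is not Pinterest's actual API or model.

```python
# Hypothetical sketch of a user-selected skin-tone search filter.
# Field names and tone tags are invented; not Pinterest's real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pin:
    title: str
    skin_tone: Optional[str]  # estimated by an ML model; None when undetected

PINS = [
    Pin("summer makeup look", "deep"),
    Pin("summer makeup look", "light"),
    Pin("beach outfit ideas", None),
]

def search(query: str, tone_filter: Optional[str] = None):
    results = [p for p in PINS if query in p.title]
    if tone_filter is None:
        return results
    # Keep matching pins, plus untagged ones: the ML estimate is imperfect
    # and missing for many pins, so dropping them would over-filter.
    return [p for p in results if p.skin_tone in (tone_filter, None)]

print([p.skin_tone for p in search("makeup", tone_filter="deep")])  # ['deep']
```

The design choice worth noticing is that the user supplies the signal (their selected tone) rather than the system inferring it, which is exactly the controlled, small-scale approach the article describes.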

This is my example of how a company can approach a problem thoughtfully rather than assuming an algorithm alone will solve it. I’m using it as evidence after I discuss the idea that we can’t use ML as a black box, churning out algorithms instead of answering actual questions; that’s not actually being socially good. Pinterest’s efforts are a counterexample to harmful black-box thinking: the team is doing a lot of research, making the tool intuitive and simple, and recognizing that it won’t solve the entire problem all at once.

People are trying to find solutions to these problems, but detecting what is a lie and what is truth is not as simple as it seems. In her article titled “Google and Facebook Can’t Just Make Fake News Disappear,” author Danah Boyd discusses fake news and the best ways to combat it. Besides blatantly wrong information, there is “subtle content that is factually accurate, biased in presentation and framing, and encouraging folks to make dangerous conclusions that are not explicitly spelled out in the content itself” (Boyd). Additionally, computer algorithms don’t make problems go away, since people usually find a way around them. On AOL, “those who identified anorexia as a lifestyle started referring cryptically to their friend ‘Ana’ as a coded way of talking about anorexia without triggering the censors” (Boyd). This kind of censoring limits free speech in a way that forces certain groups out of the mainstream. That can be dangerous, especially if those groups hold views harmful to society, because their collective reasoning goes largely unseen by the groups that could provide counterarguments. Ultimately, as Boyd asserts, “we have a cultural problem, one that is shaped by disconnects in values, relationships, and social fabric,” and the only way to undermine it is to “develop social, technical, economic, and political structures that allow people to understand, appreciate, and bridge different viewpoints” (Boyd). No simple solution is going to solve these kinds of problems, and attempting to block people’s viewpoints is eerily controlling. The truth is very hard to determine, and people subsequently aren’t able to make the most informed decisions about what to believe. As long as people are going to find this fake news anyway, it may be better to have everything out in the open so people can have more choice and compare opposing views (like seeing which sources are biased and which aren’t) before deciding what opinions to hold, rather than forming hate groups against other well-intentioned people.
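
As a toy illustration of why the “Ana” workaround defeats automated censors, here is a minimal sketch (synthetic, my own example rather than anything from Boyd’s article) of a naive blocklist filter: it catches the literal term but misses the coded one.

```python
# Synthetic example: a keyword blocklist is trivially evaded by coded language.
BLOCKLIST = {"anorexia"}

def is_blocked(post: str) -> bool:
    # Naive check: flag the post if any word is on the blocklist.
    words = post.lower().split()
    return any(word.strip(".,!?") in BLOCKLIST for word in words)

print(is_blocked("tips from my anorexia forum"))    # True  -- literal term caught
print(is_blocked("my friend Ana says skip lunch"))  # False -- coded term slips through
```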

This adds to the argument that detecting what is good and what is bad is not so simple. We can’t just block people on the internet from having ideas that are detrimental to themselves, to those who come in contact with their ideas, and to society: we have to actually recognize the societal problems at the root. This source doesn’t relate as directly to my concepts, but I still found it useful since it discusses free speech and echo chambers and questions why we can’t just block certain kinds of ideas and find truth easily. It is hard to detect what the truth is, let alone convey it to an audience in a way they can fully grasp.

Mark Hurlburt’s company runs a bootcamp that tries to get people from different backgrounds into CS so that they can build more impactful tools, since they bring experience from other areas. Participants work 80-90 hours a week for six months developing those skills. I thought this was an incredible idea, since the big problem with entrepreneurs in Silicon Valley is that they have the skills to develop software but often don’t realize what’s actually going to be helpful to the world, especially in the long run. He says: “People who build software are working with very little if any context for the industries they are trying to revolutionize.” Also, “the future of everything is too important to be entrusted to a small group of people.” It is very important that we recognize this ethical dilemma and try to encourage diversity, freedom of speech, and bridging the digital divide. He backs this up with data: “Companies with diverse management teams are 35% more likely to have financial returns above their industry mean and develop more products and innovations.”

Hurlburt’s slogan, “Empathy beats engineering,” sums up his TED Talk well. He is basically arguing for more people in the tech workplace who can actually build technology that helps people. This source was very persuasive and provided a fresh angle because I hadn’t really given coding boot camps much thought before. But they prove to be a great way of getting people into the Silicon Valley workforce who can make real change. The solution to the problem isn’t really the technology; it’s identifying the need.

I used this source for more facts on boot camps, not really for a fresh angle:

  • There are pros and cons to these tech boot camps.

  • Code schools will graduate 22,000 students in 2017, about half as many as all accredited colleges and universities combined (despite the latter’s 200-year head start).

  • The best, most useful boot camps succeed because it is a competitive industry.

This text is all about the approach technologists can take to realize the effect their products have on the world. Why do we even need commandments? We don’t just want to adopt the “first, do no harm” rule. Fixing the world is more complicated than that, and we can’t simplify doing good down to having good intentions; many people have good intentions and end up hurting people anyway. We want to have direction with our research, ideas, and discoveries. The commandments are basically to swell happiness, balance costs and benefits, enact reasonable laws that limit potential damage, and enable fresh scientific insights.

This source was really valuable to me because, before reading it, I didn’t know how to argue my points beyond saying: try to build tech that helps people. It showed that some ways of doing good are better than others, and that there are ways to learn by example. The biggest takeaway was that some people think their ideas are completely sound when they should actually work with the government and society to give them checks and balances. This is different from the more capitalist attitude in Silicon Valley that favors a lack of regulation.

This TED Talk, “The Business Benefits of Doing Good” by Wendy Woods, discusses the ideas businesses use to measure their impact and progress and how those tools can be rethought. TSI (total societal impact) is a better focus than CSR (corporate social responsibility), which is the first thing cut when a company senses it is failing, and than TSR (total shareholder returns), which does not tell the whole story. In a study, she “looked at oil and gas companies, and the oil and gas companies that are performing most strongly on TSI see a 19 percent premium on their valuation... when they do really well on things like minimizing the impact of their company on the environment and water, and when they have very strong occupational health and safety programs. And when they also add in strong employee training programs, they get a 3.4 percentage point premium on their margins” (Woods). Also, “biopharmaceutical companies that are the strongest performers on TSI see a 12 percent premium on their valuation. And then if they're best at expanded access to medicines -- making medicines available for the people who need them -- they see a 6.7 percentage point premium on their gross margins. Consumer goods companies that perform best on total societal impact see an 11 percent valuation premium. And then if they do those smart things with their supply chain -- inclusive and responsibly sourcing their product -- they see a 4.8 percentage point premium on their gross margins” (Woods). Companies need to prioritize the long term over the short term and recognize that consumers get excited to buy products from companies that do good. She says “thinking about business benefits of doing good makes people feel selfish,” but “making money ethically and sustainably is something to be proud about.” If people can do good for society profitably, then it is more sustainable in the long run.

This source was helpful because it’s easy to talk about doing what is socially good as something we should all strive for, but when it comes down to it, people look out for themselves at the end of the day. This source shows that regardless of the perspective of a company’s leadership, doing good for the world means good business. It was also useful for all the facts it mentions; I had been looking for a study like the one it cites.

In her TED Talk, Zeynep Tufekci explains that we train AI systems without actually understanding what they are “thinking.” She then describes hiring algorithms: investors were very excited because they thought algorithms would solve the ethical dilemma of biased hiring, but even though humans are biased, there is great value in humans thinking through ethical and moral dilemmas. Her message is that we need to realize what our black-box systems are actually doing and be responsible for it. Hiring algorithms learn from past decisions and the biases already in place, so an AI may hire a certain way simply because company culture already leans in that direction. She used key catchphrases like “we cannot outsource our moral responsibilities to machines” and “artificial intelligence does not give us a get out of ethics free card.” She also uses the term math-washing: “data scientist Fred Benenson calls this math-washing. We need the opposite. We need to cultivate algorithm suspicion, scrutiny and investigation. We need to make sure we have algorithmic accountability, auditing and meaningful transparency” (Tufekci). Another example was someone’s project to predict depression; its creator didn’t want to hear that it might fail to flag people who would become depressed later and refused to listen to Tufekci. AI also plays a role in predicting repeat offenders of crimes, but does so in a way that favors white people.
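
To see how that feedback loop works, here is a minimal, synthetic sketch (my own illustration, not an example from the talk) of a model “trained” on biased historical hiring labels, which then reproduces the bias for equally skilled candidates.

```python
# Synthetic demo: a model trained on biased hiring history echoes that bias.
import random

random.seed(0)

# Fake history: skill alone should decide, but past recruiters hired
# group "A" far more often than group "B" at the same skill level.
history = []
for _ in range(1000):
    group = random.choice("AB")
    skill = random.random()
    bias = 0.8 if group == "A" else 0.3   # human bias baked into the labels
    hired = skill > 0.5 and random.random() < bias
    history.append((group, skill, hired))

# "Training" a naive model: memorize each group's historical hire rate
# among skilled candidates, then threshold on it.
def hire_rate(group):
    outcomes = [h for g, s, h in history if g == group and s > 0.5]
    return sum(outcomes) / len(outcomes)

def model_says_hire(group, skill):
    return skill > 0.5 and hire_rate(group) > 0.5

# Two equally skilled candidates get different answers:
print(model_says_hire("A", 0.9))  # True  -- the model echoes the old bias
print(model_says_hire("B", 0.9))  # False -- despite identical skill
```

Nothing in the code “decides” to discriminate; the bias arrives entirely through the labels, which is why treating the system as a black box hides the problem.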

Talking about AI as a “black box” was something I hadn’t really thought about before, and I really like the rhetoric she uses to argue that humans should be responsible for their creations. With this source, I feel like AI is a perfect case-study lens because it is an example of technology that has gone viral but creates a lot of problems. The source was very persuasive, and her speech had a great call to action.

Silicon Valley start-ups are becoming so common, and such a quick path to success, that people often build businesses without putting significant thought into how those businesses will function in society. This is a criticism of tech optimism and the Bay Area bubble. Rule-breaking culture should not be the norm unless the rules are bad, but it is the norm in Silicon Valley. There are many examples of fraudulent companies:

  • Skully, the failed maker of smart motorcycle helmets, being sued for “fraudulent bookkeeping”; Rothenberg Ventures, accused of using investors’ money to finance founder Mike Rothenberg’s side startup.

  • Faraday Future and Hyperloop One: ambitious, well-funded companies now tainted by lawsuits and accusations of, respectively, overhype and mismanagement.

  • There’s more money than ever ($73 billion in venture capital was invested in U.S. startups in 2016, compared with $45 billion at the peak of the dotcom boom, according to PitchBook), there’s less transparency as companies stay private longer (174 private companies are each worth $1 billion or more), and there’s an endless supply of legal gray areas to exploit as technology invades every sector, from fintech and med-tech to auto-tech and ed-tech.

  • Airbnb’s famous “farming” strategy (it spammed people advertising rentals on Craigslist to lure them to Airbnb); people speak breathlessly about how “T.K.”—Uber cofounder Travis Kalanick—has repeatedly ignored legal roadblocks.

Basically, these companies are faking it to gain a competitive edge and hurting people in the process. This Fortune article by Erin Griffith is a telling description of what the culture is like in Silicon Valley and why some of these ideas are very harmful. Before finding it, I didn’t really have evidence for why start-ups, with their hastily implemented business plans and products that ship as fast as possible, weren’t helping the world. These companies have to cut corners to generate profit because they’re not necessarily beneficial to people in the long term, and people don’t want to support something that impacts them in a negative way.

There is danger in not doing research beforehand and thinking that the “tech way” is always the best way. “They have the power to change policy, but no corresponding check on that power,” said Megan Tompkins-Stange, an assistant professor of public policy at the University of Michigan. “It does subvert the democratic process.” It’s easy to have confirmation bias and think something is helping people when it really may not be. This is almost a monopoly or tyranny over the education system; tyrants, of course, think they are doing what’s best for people and that they know best no matter what the research says. Students have less of a say, perhaps because schools are so underfunded that teachers are forced to accept the help when it is given to them. Tech personalities become “venture capitalists” for schools, and just as in Silicon Valley, there is pressure to put these tech “advances” into schools as quickly as possible without really researching their effectiveness. Also, “Four former Summit teachers said they found the system problematic. They asked that their names be withheld, saying they feared repercussions for their careers.” This is not good: when people feel uncomfortable speaking freely, that is the perfect way to send society on a downward spiral where we only think we are doing the best for people.

In her New York Times article “The Silicon Valley Billionaires Remaking America’s Schools,” author Natasha Singer discusses CEO philanthropists from tech companies coming in and proposing tech strategies to revolutionize education. However, many of these programs and systems aren’t proven to work, and people feel bad about speaking up against them because funding for schools is hard to come by. In reality, money won’t solve everything; these philanthropists are really peddling “quick and easy” solutions to problems that are fundamental to human life. Education is too important not to be taken more seriously. This provides an additional case-study lens for my project because education is a much bigger and more complex issue than current technology is trying to solve for. Learning must be very personalized, and education technology must actually be proven before it spreads through classrooms. Also, the fact that parents and teachers felt the system wasn’t working but found it difficult to stand up and push back against these rich philanthropists calls the idea of free speech into question: if something isn’t working, how can awareness be spread when the creators won’t listen?

