
WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about how I might use my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. In these blog posts you'll find my attempts to be a heckler (or techler, haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P


PROJECTS


PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Currently, the logical, information-processing side of the human mind is the part we understand best and the part we use to build helpful computers, but the complexities of the subconscious still prevent technology from becoming “human.” Even so, artificial intelligence has come a long way towards replicating creativity, analysis, and intelligence, and it may even offer humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to have a future where robots and people both have consciousness? And if this happens, how can these two groups best function together to maximize prosperity?


PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research on corporate philanthropy, psychological ideas such as argumentative theory, and why advancements in technology have great potential to damage society. Project III is a much more developed version of this project.


PROJECT III

The Social Good Revolution: How Corporate Responsibility Can Enable Technological Innovation and Beneficially Impact Society

Abstract: In this day and age, technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology incur short-term gains while corrupting entire populations, and unmoderated sides of the internet decrease participant responsibility and allow hateful groups to reach others under the guise of anonymity. All these advances pose ethical and moral questions we've never faced before. Building technology with the benefit of society in mind may change from being the “right” thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are realizing where their algorithms fall short of serving the needs of their customers, and others like Google are hiring teams of ethicists and setting goals for themselves regarding their impact on the world. When technology companies and their engineers are aware of the unintended consequences of their new technology, they can build better products that make everyone better off and keep the company sustained in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than just intention. Argumentative theory explains that individuals must interact and compare ideas in order to dismantle their confirmation bias. People are starting to care more about working for companies that make ethical decisions. They can contribute by questioning corporate intentions, expressing their opinions, and feeling confident in the social impact of the products they build. Companies can encourage this kind of culture among their ranks by aiming for diversity of thought while hiring and by being open about their decision making. These efforts incentivize engineers to work for those companies and make the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices


Project III: The Social Good Revolution

*Note that this project is also linked as a document (with much better formatting) on the Project III page*




The Social Good Revolution: How Corporate Responsibility Can Enable Technological Innovation and Beneficially Impact Society

One day, we may live in a dystopian world where artificial intelligence predicts our every move, robots roam the streets in search of insurgents, and evil masterminds leverage data and technology to tyrannize those under their control. As our capability and knowledge increase, so does the possibility that life on Earth will become eerily similar to a science fiction movie… The future is terrifying. Because technological advancement has such destructive potential, companies can achieve long-term success by encouraging collaboration among employees and taking responsibility for unintended consequences. By taking tangible steps to make products that benefit people, companies help workers find more meaning in their work, and consumers are more likely to value the company. These days, the development of technology, particularly AI, the internet, and social media, has potential dark sides that lead to ethical dilemmas about its use. Since the corporations that build these products have such an impact, there are plenty of reasons why they should be responding to these moral questions and focusing their development around specific missions. Developing tech for the benefit of society is difficult, requiring small, controlled steps, a focus on desired outcomes, and constant iteration. The way to tangibly benefit society is to leverage the power of argument and people's desire to make meaningful contributions to their communities. The future is terrifying, but focus, collaboration, and diversity provide a path towards innovation and success for the benefit of all.

Why the Growing Influence of Technology Makes Responsibility for its Dark Side Necessary

As shown by the increasingly complex ethical and political questions recent technology has brought about, the programs and devices engineers build have a growing potential to harm society and accentuate chaos. This leads to the question of what it means to build technology with the good of society in mind and why doing so is vital to the future. In the past, successful technology usually led to better living conditions, even if there were short-term drawbacks. During the Industrial Revolution, no one had to ask whether advances in steel, oil, and electricity production would ultimately benefit people. They simply helped everyone because they freed up time and reduced unnecessary human labour. Issues in society such as class divides, poverty, and corruption didn't exist because of the technology itself; they arose from issues of wealth, power, and politics. During that time, select philanthropists who profited from the success of their corporations took on the responsibility of serving the needs of individuals that technology couldn't necessarily meet, and this was a separate endeavor from business. Nowadays, however, philanthropy is increasingly intertwined with corporate structures themselves, because the technology those corporations contribute to has a less understandable and possibly greater effect on how individuals interact with the world and each other.

Artificial intelligence is an application of technology that holds great promise while posing problems that are difficult to understand. In her TED Talk, titled “Machine Intelligence Makes Human Morals More Important,” Zeynep Tufekci explains that it is difficult to tell how AI makes decisions. This is especially alarming when those decisions greatly affect people's lives. She says, “we cannot outsource our moral responsibilities to machines.” “We need to cultivate algorithmic suspicion, scrutiny and investigation” and ensure “accountability, auditing and meaningful transparency” for its applications (Tufekci). AI can be thought of as a “black box” of sorts: we can determine what goes in and what comes out, but we know little about what happens in between. Humans could simply leave all decisions to a machine, but what happens when its idea of the “right” decision differs from a human's? Even as our creations get smarter, there is still a need for beings with consciousness to be responsible for them. Solving moral problems usually requires more than just math and computer processing, so the qualifications for what AI can actually be responsible for are unclear.

Further reading: In light of Google's AI developments, the Future of Computing Academy calls on researchers to discuss the downsides of their technology and algorithms.

When AI is used to predict people's futures, the results pose especially problematic questions about the morality of its role. During an event titled “AI for Social Good” hosted by Blueprint, a club I'm involved with at Berkeley, I introduced and facilitated a question-and-answer panel. The speakers were accomplished individuals from multiple fields who focus on using AI for the betterment of society. Josh Kroll is a postdoc specializing in governance and computer systems at the U.C. Berkeley School of Information. In addition to talking about biased face detection and the importance of training data on edge cases, Kroll discussed the issues that arise when AI allocates resources to groups or determines how they are represented in the real world. He explained that an algorithm called COMPAS is currently used to assess people's risk of criminal re-offence on a scale of 1-10, but a study done by ProPublica found the algorithm to be biased because “people of colour are arrested more often so they naturally have a higher risk of being rearrested” (Kroll). On one hand, this is just an algorithm that compares potential criminals with others before them and determines their score based on statistics. However, the numbers mean different things for different races in this case, and the AI's decisions can deeply harm people's lives. Kroll goes on to ask: “Is the problem in your analysis or your data collection or is it an unfairness in the world? The world is not a terribly fair place and so maybe we shouldn't hold technologies to some very high standards of fairness” (Kroll). COMPAS's algorithm is supposed to be fair since it is based purely on data, not human cognitive bias, but people disagree with its judgements. Without a better definition of what fairness means, individuals feel violated by technology's decisions. And finding the edge cases, and training data in general, requires machines to understand human perception of the world. What decisions does AI make incorrectly, and how can better ones be made?
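To make that concrete, here is a minimal Python sketch of the kind of analysis ProPublica ran. Everything in it (the group names, base rates, score formula, and threshold) is hypothetical and invented for illustration; it is not COMPAS's actual model or data. It shows how a score built from historical arrest rates can flag people who never reoffend far more often in one group than in another:

```python
# Hypothetical illustration (not COMPAS's real model or data):
# a risk score driven by historical arrest rates can produce very
# different false positive rates for different groups.
import random

random.seed(0)

def simulate_group(n, base_rate):
    """Simulate n people whose group's historical re-offence rate is
    base_rate. The 1-10 score reflects the group's history plus noise,
    not anything about the individual person."""
    people = []
    for _ in range(n):
        reoffended = random.random() < base_rate
        score = min(10, max(1, round(base_rate * 10 + random.gauss(0, 2))))
        people.append((reoffended, score))
    return people

def false_positive_rate(people, threshold=5):
    """Among people who did NOT reoffend, the fraction labeled high risk."""
    innocent = [score for reoffended, score in people if not reoffended]
    return sum(1 for score in innocent if score > threshold) / len(innocent)

# Group B appears more often in historical arrest data, so its scores
# skew higher even for people who never reoffend.
group_a = simulate_group(10_000, base_rate=0.3)
group_b = simulate_group(10_000, base_rate=0.6)

print(f"False positive rate, group A: {false_positive_rate(group_a):.2f}")
print(f"False positive rate, group B: {false_positive_rate(group_b):.2f}")
```

Even in this toy setup, innocent people in the higher-base-rate group are flagged several times more often. That is the pattern ProPublica reported: the score is “purely based on data,” yet its mistakes fall unevenly.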

Further reading: Josh's company, Rocky Coast Research, helps “clients balance business interests with responsibility and user trust,” applying some of the concepts discussed in the second section.

The impact of another form of technology, the internet, also reaches far beyond what it was intended to do. A TED Radio Hour podcast episode titled “Unintended Consequences,” hosted by Guy Raz, discusses the dark and destructive ways people can use technology that technologists often fail to foresee. Guest speaker Yasmin Green, who gave a TED talk titled “How Did The Internet Become A Platform For Hate Groups?,” discusses her work counteracting terrorism and giving people access to information that can help them make informed decisions. Originally, teams she worked on envisioned that the internet would “connect people to information [and] to each other,” for it was “going to transform democracies” and “empower populations” (Green). It's natural to have an optimistic take on new, exciting technology. However, the intentions of technologists do not always pan out. The terrorist group ISIS took off because of its internet presence, radicalizing individuals who had limited sources of information about the realities of terrorism. Green says “it's easy and dangerous to say, well, there are good people and bad people because...the prescription is really punitive technologies or policies” (Green). Some believe the solution is to push certain people to the so-called dark web or selectively remove their internet presence. However, there are technical limitations to detecting these people and ethical questions about what should and should not be censored. Restricting people who disagree with the “correct” way society or internet companies view the world isn't as simple as it sounds. So it's important to think about the potential use of technology by hate groups and people with bad intentions. The dream of the internet connecting people is far different from the real moral questions about freedom of expression it raises. Left unchecked, hate groups on the internet could polarize further and further, eventually consuming the online world with detrimental ideas. This is an issue that concerns all users and something people are deeply invested in.

How Corporate Social Responsibility Is Driven by Market Competition, Baby Steps, and Mission-Centric Development

Businesses have shifted focus from shareholders to the effects they have on society and how they are viewed by the people they serve. In a TED Talk titled “The Business Benefits of Doing Good,” Wendy Woods discusses the ideas companies use to measure their impact and progress, as well as how those tools can be rethought. Intelligent corporate leaders are starting to focus on TSI (total societal impact) and CSR (corporate social responsibility) as opposed to TSR (total shareholder returns), because TSR does not determine a company's success in the long run. Citing a study, she says “the oil and gas companies that are performing most strongly on TSI see a 19 percent premium on their valuation...when they do really well on things like minimizing the impact of their company on the environment and water, and when they have very strong occupational health and safety programs” (Woods). TSI is difficult to measure because it is subjective from person to person and changes over time; it is up to society to determine what that measurement is. Even so, companies that prioritize TSI and think about the long term are more successful because customers are more driven to support them. Prioritizing the long term over the short term means recognizing that quarterly-driven decisions lack foresight for the far-off future and may be destined to fail. Woods says “thinking about business benefits of doing good makes people feel selfish,” but “making money ethically, sustainably is something to be proud about.” From a business standpoint, if people can do good for society profitably, then why wouldn't they? The difficult part is figuring out where exactly a business's view of what society wants differs from what society actually wants and will pay for. Because of this, technology businesses oriented around profit and quarterly-driven development may be missing out on parts of the market and on long-term success. Uber and Lyft's efforts appear genuine, but there are also countless examples of companies being fraudulent and misrepresenting their intentions to the public.

Critical missions are often achieved by thinking outside the box and taking small, measurable steps. In an interview for Wired titled “How Technology Accentuates Tribalism,” LinkedIn CEO Jeff Weiner describes how his company is trying to understand the effects it has on minorities as its users expand their networks. LinkedIn's mission is to allow people to connect with others and find jobs, but problems occur when the platform provides “more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks” (Weiner). Noticing this issue, LinkedIn set up the Career Advice Hub so people could ask questions and find mentors. The world can be very unfair when people who already have opportunity are given more and more regardless of their actual merit; the hub is a step towards bridging the gap between people who have resources and people who don't. LinkedIn is trying to recognise the needs of the underdog and also help companies find more diverse candidates to hire. LinkedIn wants to connect the world, and it has interpreted that to mean giving minorities networking opportunities as well. This responsibility is in LinkedIn's hands, so the company gets to determine what “connection” means.

Further reading: Pinterest is tackling diversity issues with AI on a smaller but still impactful scale, through carefully designed and researched technology.

The final speaker at the AI for Social Good event was Ilya Kirnos, CTO and founding partner of SignalFire, a venture capital firm with a focus on technology and AI investments. His firm is staffed by engineers (who have a deep understanding of what makes technology successful) and invests large endowments in 10-year increments. He explained that there are two types of AI: first, underhyped low- and medium-stakes AI, such as Google ads and search features, which has few consequences if it fails; second, overhyped high-stakes AI, such as self-driving cars and predictions in medicine, which could have very bad consequences if it fails and should first be proven in low-stakes cases. This differentiation is key to separating what is undeniably helpful to people, because it saves them time or is something they are willing to pay for, from what seems exciting but is not yet fully understood. The world simply isn't ready for the latter kind of technology all at once. Ilya and his company decided to focus on a certain sweet spot for self-driving vehicles and invest in autonomous forklifts. Forklifts cause about 85 fatalities and 34,900 serious injuries each year, so in the short run jobs are lost in the industry, but in the long run lives are saved and profit is made. Although SignalFire's mission is not necessarily to pursue social good, the firm is focussed on what technology can feasibly be integrated into our current economy and what the impacts of that technology are. It examines externalities to determine what medium- and low-stakes AI is here to stay. By this framing, the AI used in LinkedIn's and Pinterest's efforts would be considered low stakes, while the COMPAS algorithm would be considered high stakes and not really ready to have a significant place in the world. When humans take responsibility for finding specific, constructive uses for technology, it can shift from high stakes to medium and low stakes.

How Meaningful Work and Diversity of Thought Empower Societal Improvement

Being human and conscious means finding meaning in the world and one's place within it. If people don't think the work they are doing solves problems they care about, they're going to be frustrated. In a related vein, people are biased towards their own ideas, yet the assumptions people hold about the world can be unfounded and incorrect. People may try to solve one issue, but another group might realize that the problem is something entirely different. In this day and age, with social media that polarizes us and important philosophical questions left unanswered, the world is in a state of mass confusion. Wouldn't it be wonderful to have a planet where everyone feels like they are contributing to society and better ideas are being discussed and implemented because of it? Obviously, this is a utopian dream, but humanity can take steps in the right direction: towards diversity of ideas, which goes hand in hand with job satisfaction. And with all the ethical questions that need answering in this age of technology and information, transformative concepts like this are increasingly necessary.

Listening is underrated… and people aren't reaching their full potential because of it. Humans do not always perceive the world rationally, even when they believe they are doing so. People need to be exposed to differing viewpoints to counteract their own confirmation bias and ego. The paper “Why Do Humans Reason? Arguments for an Argumentative Theory” by Hugo Mercier and Dan Sperber describes an understanding of this detrimental behavior. The authors explain that “reasoning falls quite short of reliably delivering rational beliefs and rational decisions…[it] can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions” (Mercier and Sperber). When someone has an explanation for something in their head, they have difficulty accepting information that goes against that idea. Cognitive dissonance, which occurs when someone holds two opposing views at once, is uncomfortable, so people often avoid talking to those who disagree with them whenever possible. Everyone has their own view of what philanthropy looks like in the world, and thus any tech company, whether a big corporation, startup, or nonprofit, has its own ideas of how its mission and products fit into that view. But how can technologists be aware of the drawbacks and unintended consequences of their technology if they only view it in a positive light and have no one asking the tough questions?

People find meaning in their work when they are listened to, and this meaning collectively lifts companies and communities. On the podcast “Waking Up with Sam Harris,” guest Johann Hari outlines the reasons for people's attitudes towards work. In an extensive three-year study, Gallup found that “87% of people don't like what they're doing most of their life,” and research by Sir Michael Marmot explains that “the single biggest factor that causes depression at work is low or no control over [one's] job” (Hari). It's not surprising that people don't enjoy going to the same place every day to do things that no one asks their opinion about. They're condemned to misery not necessarily because of the work, but because of the lack of control. When people morally disagree with the judgement of their superiors, viewing it as corrupt and ill-intentioned, they are bound to hate the work assigned to them. To combat this, bike-shop workers in Baltimore set up a “democratic cooperative” (of which there are 10,000 in the US) where they “decided things by voting” and shared the good tasks, the uncomfortable tasks, and the profits. After doing this, they “talked about how they'd been depressed and anxious before but were not now” (Hari). Delegating management tasks matters in a large enterprise, so realistically not every company can be a democratic cooperative, but voting and discussion really helped these employees. They did not change their careers; they just changed their workplace environment. Even small amounts of autonomy encourage individuals to be more productive and, in turn, to invest themselves in the success of the company. People are realizing that they don't need an excessive amount of money or workplace status to feel fulfilled. They simply need to do work that aligns with their values and interests to be happy, so why should they settle for less?

To find meaningful work, young people in Silicon Valley are pushing for careers with transparency into company-wide decisions. An article titled “‘I Don't Really Want to Work for Facebook.’ So Say Some Computer Science Students” by Nellie Bowles in The New York Times details the social stigma that now surrounds working for Facebook, since young people do not want to be responsible for the negative consequences surrounding it. Career coach Chad Herst says that these students “are concerned about where democracy is going, that social media polarizes us, and they don't want to be building it” (Bowles). This new generation of graduates is pressured by their peers, their quest to be fulfilled, and their moral consciousness to pursue work away from big tech companies, even if it means less money in their pockets. A company's reputation has a significant impact on whether bright minds want to work there, since people don't want to be robots, doing what they're told day in and day out. Is more money worth fueling potentially malevolent schemes against the world and sacrificing one's ability to take rewarding risks and have jurisdiction over one's decisions?

Engineers at certain companies are outright refusing to work on projects that violate their beliefs. In another New York Times article, titled “Tech Workers Now Want to Know: What Are We Building This For?,” Kate Conger and Cade Metz write about Google employees' dissatisfied response to their company's involvement in national defence. Engineers are questioning whether their work will be used for “surveillance in places like China or for military projects in the United States.” They want transparency even if they aren't directly involved, because certain “infrastructure — like algorithms, databases and even hardware — underpins almost every product a company offers” (Conger and Metz). The staff disagree with their work being used as a weapon to censor or kill people who oppose Google's potentially unscrupulous partners. They are taking action by discussing, signing petitions, asking questions, and demanding transparency. To keep and please its accomplished workers, Google needs to consider the social impact of its decisions and its mission as a company, whether it wants to or not. Without its accomplished employees, Google has no products, no competitiveness in the market, no capacity to achieve goals of any kind, and no profits! As the use of technology in certain situations becomes more and more questionable, individuals are taking responsibility for their own work and, in turn, holding those they work for accountable.

Additionally, internal hiring practices and company culture can bring about social good, because people are more invested in working for companies that genuinely try to listen to diverse views (and combat the effects argumentative theory describes) so they can make better decisions and fairer products. An article in The Atlantic titled “How Slack Got Ahead in Diversity” by Jessica Nordell delineates the intense focus tech company Slack places on diversity and on making diverse employees feel accepted. “At Slack, the absence of a single diversity leader seems to signal that diversity and inclusion aren't standalone missions … but rather intertwined with the company's overall strategy” (Nordell). When people with differing views discuss ideas, they feel like they are contributing, and some may change their minds about issues and end up developing better, more ethically conscious products. Also, by involving employees in diversity conversations themselves, everyone takes responsibility for how the company actually achieves that level of awareness, inclusion, and understanding. In hiring, Slack focuses on “interpersonal phenomena like stereotype threat, in which people from stigmatized groups spend mental energy grappling with negative stereotypes about those groups” as well as on interviewers “inadvertently favor[ing] candidates who resemble themselves” (Nordell). The company recognizes that some people have privilege in certain situations and others don't. And when potential hires see that Slack is empathetic towards the issues they face, it is easier for them to feel included in the company, find meaning in their work, and contribute to the company's goals.

Having a diverse workforce is a competitive advantage that drives productivity and profits, and something individuals are actively asking for. The Institute for Public Relations published a study finding that “Nearly half of American Millennials say a Diverse and Inclusive Workplace is an Important Factor in a Job Search” (Nordell). People clearly care about diversity, but it should not just be a numbers game. To get the inclusion part right as well, some companies have made a truly thoughtful effort to bring new ideas into the workplace, helping everyone realize the advantage of diversity. Then their employees can focus on listening and empathizing to defeat their own bias, have better ideas, and be successful.

In my personal experience, the most fulfilling part of working with Blueprint is discussing the product with my own project team and with people from the nonprofit. There's so much that goes into building something that truly serves a community in need. Tools need all sorts of thought, whether from people who understand the limits of computer science development on one side or people who understand groups with specific goals on the other (like homeless people finding accessible housing, or kids affected by the earthquake in Nepal getting an education). Ideas about user experience, design, ethics, psychology, coding stacks, resources, and market research all go into making a product. In these meetings, everyone present understands these concepts differently. But when each brain brings something new to the table, better ideas usually come out of it, and people find purpose in being involved.

When asked for a concrete definition of what technology for social good exactly is, Christine Robson, a project manager for Google Machine Learning, said: “I hope society isn't hoping that Google is going to make the definition of what makes society good. That's not the place of any single corporation. That's the place of society” (Robson). Google spends a lot of time introspectively deciding what its goals are and trying to pursue meaningful projects, but it is bound to make mistakes when pursuing high-stakes projects. Everyone has a different definition of what social good is and of the ways technology will help them and their communities prosper. Companies can only do what they believe is right to best serve their customers and accomplish their missions, but for that to work, individuals need the freedom to criticize products and ideas. Companies can treat AI for social good as a cost-benefit model, but the general population needs to influence that model so society can progress.

The social good revolution comes from individuals recognizing the destructive potential of their work and ideas, and in turn contributing to companies in order to change the world in some way. As psychologist Jordan B. Peterson says, finding meaning is equivalent to “accepting the terrible responsibility of life, with eyes wide open [and] voluntarily transforming the chaos of potential into the realities of habitable order”; it means “willingly undertaking the sacrifices necessary [in order] to generate a productive and meaningful reality” (Peterson). The impacts of technology and the role of humans in the future are almost inconceivable issues to ponder. But, led by a desire to make the world better and find success in the long term, companies are recognizing the value of corporate social responsibility. And when those goals fall short, employees with different perspectives can come together to figure out what the world actually needs and what it doesn't. This helps everyone: when workers are able to contribute to discussions about best practices and high-level decisions, they find their work more meaningful. This sort of system has a lot of potential, but corruption, confusion, and human vices obviously hinder everyone's ability to be moral. All we can really do is listen to each other and try to build technology for good: so why not do so? The future is a little less terrifying when we realize that the empathy and innovation needed to make the world a better place already exist within every human being. To build a better future, be more human.


Works Cited

Angwin, Julia, et al. “Machine Bias.” ProPublica, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Boyd, Danah. “Google and Facebook Can't Just Make Fake News Disappear | Backchannel.” Wired, Conde Nast, 18 Sept. 2017.

Brockman, John. “The Argumentative Theory.” Edge.org, www.edge.org/conversation/the-argumentative-theory.

Gallup, Inc. “The World's Broken Workplace.” Gallup.com, 13 June 2017, news.gallup.com/opinion/chairman/212045/world-broken-workplace.aspx.

Gouran, Dennis S. “A Response to Hugo Mercier and Dan Sperber's ‘Why Do Humans Reason? Arguments for an Argumentative Theory.’” Argumentation and Advocacy, vol. 48, no. 3, 2012, pp. 186–188, doi:10.1080/00028533.2012.11821767.

Griffith, Erin. “The Ugly Unethical Underside of Silicon Valley.” Fortune, 28 Dec. 2016, fortune.com/silicon-valley-startups-fraud-venture-capital/.

Hurlburt, Mark. “Software, Tech Talent, Diversity, and the Future of Everything | Mark Hurlburt | TEDxFargo.” YouTube, 28 Nov. 2017, www.youtube.com/watch?v=gTHZ2FwnM-A.

Nordell, Jessica. “How Slack Got Ahead in Diversity.” The Atlantic, Atlantic Media Company, 30 Apr. 2018, www.theatlantic.com/technology/archive/2018/04/how-slack-got-ahead-in-diversity/558806/.

Olkun, Sinan. “Self-Compassion and Internet Addiction.” The Turkish Online Journal of Educational Technology, vol. 10, no. 3, July 2011.

Pardes, Arielle. “Pinterest Wants to Diversify Your Search Results.” Wired, Conde Nast, 26 Apr. 2018, www.wired.com/story/pinterest-skin-tone-search/.

Pontin, Jason. “Three Commandments for Technology Optimists.” Wired, Conde Nast, 10 Oct. 2018, www.wired.com/story/ideas-jason-pontin-three-commandments-for-technologists/.

Raz, Guy. “Unintended Consequences.” NPR, www.npr.org/programs/ted-radio-hour/662611757/unintended-consequences.

Rochetti, Akimbo Adrienne. “Technology for Social Good - Good Enough?” Wired, Conde Nast, 7 Aug. 2015, www.wired.com/insights/2013/11/technology-for-social-good-good-enough/.

Rosen, Rebecca J. “Truth, Lies, and the Internet.” The Atlantic, Atlantic Media Company, 30 Dec. 2011, www.theatlantic.com/technology/archive/2011/12/truth-lies-and-the-internet/250569/.

Schuler, Douglas A. “A Corporate Social Performance-Corporate Financial Performance Behavioral Model for Consumers.” The Academy of Management Review, vol. 31, no. 3, 1 July 2006, pp. 540–558.

Shandal, Vinay. “How Conscious Investors Can Turn up the Heat and Make Companies Change.” TED: Ideas Worth Spreading, www.ted.com/talks/vinay_shandal_how_conscious_investors_can_turn_up_the_heat_and_make_companies_change.

Silver, Darrell. “An Insider's Take on the Future of Coding Bootcamps.” TechCrunch, 26 Aug. 2017, techcrunch.com/2017/08/26/an-insiders-take-on-the-future-of-coding-bootcamps/.

Singer, Natasha. “The Silicon Valley Billionaires Remaking America's Schools.” The New York Times, 6 June 2017, www.nytimes.com/2017/06/06/technology/tech-billionaires-education-zuckerberg-facebook-hastings.html?smid=pl-share.

Thompson, Nicholas. “Jeff Weiner on How Technology Accentuates Tribalism.” Wired, Conde Nast, 15 Oct. 2018, www.wired.com/story/jeff-weiner-on-how-technology-accentuates-tribalism/.

Tufekci, Zeynep. “Machine Intelligence Makes Human Morals More Important.” TED: Ideas Worth Spreading, www.ted.com/talks/zeynep_tufekci_machine_intelligence_makes_human_morals_more_important/transcript?referrer=playlist-the_inherent_bias_in_our_techn#t-553962.


Wong, Julia Carrie. “Segregated Valley: the Ugly Truth about Google and Diversity in Tech.” The Guardian, Guardian News and Media, 7 Aug. 2017.
