
WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about how I might use my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. In these posts you'll find my attempts to be a heckler (or techler, haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P


PROJECTS


PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Currently, the logical, information-processing side of the human mind is the part we understand best and use to build helpful computers, but complexities at the subconscious level keep technology from becoming "human." Still, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and it even offers humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where robots and people both have consciousness? And if this happens, how can these two groups best function together to maximize prosperity?


PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research on corporate philanthropy, psychological ideas such as argumentative theory, and the ways advancements in technology could damage society. Project III is a much more developed version of this project.


PROJECT III

The Social Good Revolution: How Corporate Responsibility can Enable Technological Innovation and Beneficially Impact Society

Abstract: In this day and age, technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas, the sensational ways in which companies use technology produce short-term gains while corrupting entire populations, and unmoderated corners of the internet reduce participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose new and concerning ethical and moral questions. The decision to build technology with the benefit of society in mind may change from being the "right" thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are competing on total societal impact, companies like Pinterest and LinkedIn are recognizing where their algorithms fall short of serving their customers, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of new technology, they can build better products that make everyone better off and sustain the company in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than just intention. Argumentative theory explains that individuals must interact and compare ideas in order to dismantle their confirmation bias. People increasingly care about working for companies that make ethical decisions; they can contribute by questioning corporate intentions, expressing their opinions, and feeling confident in the social impact of the products they build. Companies can encourage this kind of culture by aiming for diversity of thought in hiring and being open about their decision making. These efforts incentivize engineers to work for them and make the technology they build better satisfy the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices

Techler

Rough Draft #2

5202 words/4000 :(

The Social Good Revolution: How Corporate Responsibility and Technological Innovation are Intersecting to Create the Future of Everything


The stories of robots that can take human jobs and artificial intelligence that decides what is and is not fair aren't just stories anymore. Science fiction is becoming reality. The technology we are building today determines the future and could destroy everything we know to be true, so it is time to start considering the role we should play in this development and its consequences, both positive and negative. It's time to take some responsibility. Due to the destructive potential of technological advancements, companies must encourage collaboration and take responsibility for unintended consequences in order to participate in the social good revolution. If technology companies don't focus on the future and take tangible steps to build what will be beneficial for people, not just profitable, they will be unable to achieve long-term success. These days, the development of technology and its potential dark side lead to ethical dilemmas about its use. Since technology companies have such an impact, they should be responding to these ethical questions and organizing their development around specific missions. Finally, the way to tangibly benefit society is to acknowledge argumentative theory and use collaboration, diversity, and iteration as means of innovation and success.

The Growing Influence of Technology: Reasons to Pay Attention to the Dark Side

As the increasingly complex ethical and political questions raised by recent technology show, the programs and devices engineers build have a growing potential to harm society and accentuate chaos. This leads into the questions of what it means to build technology with the good of society in mind and why doing so is vital to the future.

In the past, successful technology has generally led to the betterment of society even when there were short-term drawbacks. During the Industrial Revolution, no one really asked whether advances in steel, oil, and electricity production would ultimately be good for society. These advances simply led to a better quality of life because they freed up time and reduced unnecessary human labor. Issues in society such as class divides, poverty, and corruption didn't exist because of the technology itself; they arose from separate issues with wealth, power, and politics. During that time, philanthropists at the big corporations took on the responsibility of serving the needs that technology couldn't necessarily meet, and this was a separate endeavor from business. Nowadays, however, philanthropy is increasingly intertwined with business itself because technology has a greater effect on how individuals interact with the world.

Artificial intelligence is an application of technology that holds great promise while posing problems that are difficult to understand. In her TED Talk, titled "Machine Intelligence Makes Human Morals More Important," Zeynep Tufekci explains that it is difficult to tell how AI makes decisions, which is especially alarming when those decisions greatly affect people's lives. She says "we cannot outsource our moral responsibilities to machines." "We need to cultivate algorithm suspicion, scrutiny and investigation" and ensure "accountability, auditing and meaningful transparency" of the algorithm (Tufekci). AI can be thought of as a "black box" of sorts: we can determine what goes in and what comes out, but we know little about what happens in between. Humans could simply leave all decisions to machines, but what happens when a machine's idea of the "right" decision is different from a human's? Even as our creations get smarter, there is still a need for humans to be responsible for them. Solving problems with AI does not mean that problems in society are solved or can be ignored.

When AI is used to predict people's futures, the results pose questions about the morality of the technology's role. During an event titled "AI for Social Good" hosted by Blueprint, a club I'm involved with at Berkeley, I introduced and facilitated a question and answer panel with accomplished individuals from multiple fields who are focused on using AI for the betterment of society. One speaker, Josh Kroll, is a postdoc specializing in governance and computer systems at the U.C. Berkeley School of Information. He discussed how in 2009, when HP first employed face detection, people realized that the software did not detect black faces very well. These kinds of mistakes are harmful to society because of the cultural baggage associated with them. When there is a gap between the data used to design systems and the data they are applied to, cases like this are sure to happen and offend people. Even with careful review, Kroll comments that "even very small error percentages are going to show up some large number of times." There is no perfect way to prevent bias, and edge cases that models are not trained on can have significant effects on how the AI performs.

Furthermore, AI becomes an issue when it determines how resources are allocated to groups or how those groups are represented in the real world. Kroll explains that there is a new mandate in California to assess people for their risk of reoffense and determine whether they can have money bail. An algorithm called COMPAS is currently used to assess this risk on a scale of 1-10, but a study by ProPublica found the algorithm to be biased because "people of colour are arrested more often so they naturally have a higher risk of being rearrested" (Kroll). On one hand, this is just an algorithm that compares potential criminals with others before them and determines a score based on statistics. However, the numbers mean different things for different races in this case. Kroll goes on to say: "Is the problem in your analysis or your data collection or is it an unfairness in the world? The world is not a terribly fair place and so maybe we shouldn't hold technologies to some very high standards of fairness" (Kroll). COMPAS's algorithm is supposed to be fair since it is purely based on data, but people disagree with its judgements. Building technology means we could redefine that standard of fairness. These questions still involve technology because the algorithms play a key role in the issue.
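To make that tension concrete, here is a minimal synthetic sketch, entirely my own toy model rather than anything COMPAS actually does: two groups are scored by the identical, purely statistical procedure (a calibrated Bayesian posterior), yet the group with the higher arrest base rate in the data ends up with a much higher false positive rate among people who never reoffend. All numbers are made up for illustration.

```python
# Toy model (NOT the COMPAS algorithm): the same scoring procedure applied
# to two groups with different base rates of rearrest in the data.
import random

random.seed(0)

def false_positive_rate(base_rate, n=200_000, threshold=0.5):
    """Score people with a calibrated posterior P(reoffend | signal) and
    return how often non-reoffenders get flagged as high risk."""
    flagged, negatives = 0, 0
    for _ in range(n):
        reoffends = random.random() < base_rate
        # Noisy evidence about the person: correct 70% of the time (made up).
        signal = random.random() < (0.7 if reoffends else 0.3)
        p_sig_pos = 0.7 if signal else 0.3
        p_sig_neg = 0.3 if signal else 0.7
        # Bayes' rule, with the group's own base rate as the prior.
        score = (p_sig_pos * base_rate) / (
            p_sig_pos * base_rate + p_sig_neg * (1 - base_rate))
        if not reoffends:
            negatives += 1
            flagged += score > threshold
    return flagged / negatives

# The group that is arrested more often in the data gets flagged more,
# even among people who would never reoffend.
print("FPR, base rate 0.3:", round(false_positive_rate(0.3), 3))  # about 0.0
print("FPR, base rate 0.6:", round(false_positive_rate(0.6), 3))  # about 0.3
```

The procedure is "fair" in one sense (the scores are honest probabilities for both groups) yet unfair in the sense ProPublica measured (unequal false positives), which is exactly the disagreement Kroll describes.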

A TED Radio Hour podcast titled "Unintended Consequences," hosted by Guy Raz, discusses the dark and destructive ways people can use technology that technologists often fail to foresee. Guest speaker Yasmin Green, who gave a TED talk titled "How Did The Internet Become A Platform For Hate Groups?", discusses her work trying to counteract terrorism and give people access to information that can help them make informed decisions. Originally, teams she worked on envisioned the internet would "connect people to information [and] to each other," for it was "going to transform democracies" and "empower populations" (Green). It's natural to have an optimistic take on new, exciting technology. However, the intentions of technologists do not always pan out. The terrorist group ISIS took off because of its internet presence, radicalizing individuals because of the limited sources of information about the realities of terrorism. These issues are concerning and difficult to fix. Green says "it's easy and dangerous to say, well, there are good people and bad people because...the prescription is really punitive technologies or policies." We cannot just push these people to the so-called dark web or selectively remove their internet presence. Censoring people who disagree with the direction society or internet companies think the world should go isn't as easy as it sounds, and giving individuals access to unbiased, helpful information is a difficult problem. It is important to think about the potential use of technology by hate groups and people with bad intentions, and anticipating, correcting for, and constantly rethinking technology is the only thing society can really do. Spotting problems early and working for the good of society as a whole is incredibly important.

Another guest speaker, James Bridle, who gave a TED talk titled "What Do Kids' Videos on YouTube Reveal About the Internet's Dark Side?", explains that villainous individuals are making lengthy, sensational YouTube videos of people opening surprise eggs and pirating content from kids' TV shows in order to target children for views and ad revenue. Even more unsettling, some videos with titles that seem kid-friendly are actually disturbing and meant to scar children given the freedom to browse the site. On the surface, YouTube simply surfaces videos based on a viewer's interests and search terms. But in practice, this technology has issues when people with harmful intentions can access it. Bridle comments that YouTube has "decided to optimize for reactions and sensation over other forms of verifying knowledge." This seemingly simple way to keep people on the site and make money from ads has resulted in numerous, hard-to-foresee questions about the harmful effects of sensationalism on society and whether low barriers to providing and receiving information accentuate false or misleading information. These issues affect everyone who has ever used the platform, and they are not simple to solve. Considering how detrimental some of the videos are, society may want to rethink the value of entertainment. People are looking to be entertained, but are they sacrificing truth in the process? Should YouTube really be responsible for this, though? The tech we build is an experiment whose results we can use to change the tech and reflect on the faults within society.
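To see why "optimize for reactions" is not a neutral choice, consider a deliberately oversimplified sketch (my own toy example, not YouTube's recommender): if the only ranking signal is predicted engagement, nothing in the objective penalizes sensational, false, or kid-unsafe content.

```python
# Toy ranking (NOT YouTube's system): predicted engagement is the only signal.
videos = [
    {"title": "Calm educational short",         "predicted_watch_minutes": 2.1},
    {"title": "Sensational surprise-egg video", "predicted_watch_minutes": 9.7},
    {"title": "Disturbing knockoff cartoon",    "predicted_watch_minutes": 8.4},
]

# The objective never asks "is this true?" or "is this safe for kids?"
# It only maximizes time on site, so the sensational items rise to the top.
for v in sorted(videos, key=lambda v: v["predicted_watch_minutes"], reverse=True):
    print(v["title"])
```

The harm is not a bug in the sorting; it is the absence of any other term in the objective, which is Bridle's point.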

Businesses have shifted focus from shareholders to the effects they have on society and how they are viewed by the people they serve. A TED Talk titled "The Business Benefits of Doing Good," given by Wendy Woods, discusses how businesses measure their impact and progress, and how those tools can be rethought. Companies are starting to focus on TSI (total societal impact) and CSR (corporate social responsibility) as opposed to TSR (total shareholder returns) because TSR does not determine a company's success in the long run. Citing a study, she says "the oil and gas companies that are performing most strongly on TSI see a 19 percent premium on their valuation...When they do really well on things like minimizing the impact of their company on the environment and water, and when they have very strong occupational health and safety programs" (Woods). TSI is difficult to measure because it is subjective from person to person and changes over time; it is up to society to determine what that measurement is. Even so, companies that prioritize TSI benefit in the long term because customers are more driven to support them regardless of fads, so their market will still be there. Companies that fail to prioritize the long term over the short term, relying on quarterly-driven decisions that lack foresight, are destined to fail. She says "Thinking about business benefits of doing good makes people feel selfish," but "making money ethically, sustainably is something to be proud about." From a business standpoint, if people can do good for society profitably, then why wouldn't they? The difficult part is figuring out where exactly a business's view of what society wants differs from what society actually wants and will pay for. This goes back to argumentative theory: the bias technology-based businesses have towards their own ideas can prevent them from building technology that actually serves needs and from correcting the faults in their products that harm society.

Being empathetic towards customers through corporate philanthropy programs is one way for companies to make a name for themselves. For example, an ABC News article titled "Ride Share Companies Embrace Election Frenzy" by Cathy Bussewitz details how Lyft and Uber have made an effort to do their part in getting people out to vote. Lyft is "working with Voto Latino … as well as non-profit organizations that help blind people and student veterans to distribute discount codes and identify where free rides are needed," and Uber "is offering $10 off rides to the polls across the country and added a feature in its app that helps customers find their polling stations by typing in a home address" (Bussewitz). These companies identified that a lack of transportation was one reason people weren't voting, and they used their platforms and influence to get more people to the polls. The underlying motive for both companies may be that social work differentiates them from the competition: each wants to be regarded as the one that really cares about social issues, is empathetic toward those in need, and recognizes its power to help. These days, unethical corporate practices are exposed when their stories spread through social media, so acting in the best interest of society is increasingly important. Also, technology is transforming the world more quickly than ever before, so there are plenty of new, unexpected ways technology companies can help people.

How do non-profits and philanthropists ensure they are actually fulfilling a need? The tech industry is huge, and people are turning their ideas into companies and shipping products more rapidly than ever before. According to an article by Erin Griffith in Fortune titled "The Ugly Unethical Underside of Silicon Valley," "73 Billion dollars in venture capital was invested in U.S. startups in 2016" (Griffith), and according to CNN, "Apple, Alphabet, Microsoft, Amazon and Facebook are collectively worth nearly $3.3 trillion." Silicon Valley start-ups have become so common, and such a quick path to success, that people often build businesses without putting significant thought into how those businesses will function in society. This rule-breaking culture is the norm in Silicon Valley, and sometimes the decisions start-ups make are outright unethical. For example, "Skully, the failed maker of smart motorcycle helmets, [is] being sued for fraudulent bookkeeping," "Faraday Future and Hyperloop One, ambitious, well-funded companies [are] now tainted by lawsuits and accusations of, respectively, overhype and of mismanagement," and "there's less transparency as companies stay private longer (174 private companies are each worth $1 billion or more), and there's an endless supply of legal gray areas to exploit as technology invades every sector" (Griffith). Basically, these companies are faking it to gain a competitive edge and hurting society in the process. The article is a telling description of the culture in Silicon Valley and why some of these ideas are harmful. Start-ups seem productive and beneficial to society with their quickly implemented business plans and products that ship as fast as possible, but these companies cut corners to generate profit because they're not necessarily beneficial to people in the long term, and people don't want to support something that impacts them negatively.

There is also an issue with search engines and websites that rely on user preferences, creating echo chambers of opinion and bias towards certain groups. An article by Arielle Pardes in Wired titled "Pinterest Wants to Diversify Your Search Results" details how Pinterest is implementing a simple feature that lets people select their skin tone in order to get more helpful search results. She explains that "it's the very beginning of a longer journey toward bringing greater diversity to Pinterest's platform, through showing different complexions, body shapes, disabilities, and ages. Those are complex problems," but "the whole point of adding visual filters was to remove the barriers to content discovery" (Pardes). The tool obviously isn't perfect since it relies on machine learning, but it is an example of a very controlled, small-scale approach to solving a diversity problem. Brute-force computing and more data aren't the answer. This issue is going to be solved by people who really understand their product and audience and are trying to provide the best experience for those people. Pinterest recognizes the limitations of its technology and is careful in its search for a solution in order to benefit its users as much as possible.
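As a rough illustration of the shape of such a feature (my own hypothetical sketch, not Pinterest's actual code), the filter can be thought of as a post-processing step on search results, where an upstream ML classifier has tagged each pin with a detected skin tone:

```python
# Hypothetical sketch of a skin-tone search filter; names and data are
# illustrative, not Pinterest's real system.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pin:
    title: str
    detected_skin_tone: Optional[str]  # set by an upstream classifier; None if undetected

def search(pins, query, skin_tone=None):
    """Match the query, then apply the optional skin-tone filter."""
    results = [p for p in pins if query.lower() in p.title.lower()]
    if skin_tone is not None:
        results = [p for p in results if p.detected_skin_tone == skin_tone]
    return results

pins = [
    Pin("summer makeup look", "deep"),
    Pin("summer makeup look", "light"),
    Pin("summer makeup look", None),  # classifier failed on this pin
]
print(len(search(pins, "makeup", skin_tone="deep")))  # 1
```

The `None` case is the point: wherever the classifier fails, pins silently disappear from filtered results, which is why the feature's usefulness is bounded by the machine learning behind it.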

In an interview for Wired titled "How Technology Accentuates Tribalism," LinkedIn CEO Jeff Weiner describes how his company is trying to understand the effects it has on minorities, given the pre-existing networks of most of its patrons. As Weiner notes, it feels like every week there's another headline about how some of this stuff is going in the wrong direction. LinkedIn's mission is to allow people to connect with others and find jobs, but to fulfill it, the company needs to understand the unexpected impact of its technology. Problems occur when the platform provides "more and more opportunity for those that went to the right schools, worked at the right companies, and already have the right networks" (Weiner). The world can be very unfair when people who already have opportunity are given more and more regardless of their actual merit. It doesn't seem fair that certain people should have to work so much harder than others to get to the same place, and technology makes it even harder in some cases. LinkedIn implemented a button that allows people to ask for referrals from others in their network, but the company realized that a lot of very capable people without networks couldn't use the button to their advantage. So, they set up the Career Advice Hub so people could ask questions and find mentors, bridging the gap between people who have resources and people who don't. LinkedIn is trying to recognize the needs of the underdog and also help companies find more diverse candidates to hire. Connecting the world, spreading ideas, and giving minorities opportunities to have a voice has shifted into the hands of technology companies, so it is vital that they take responsibility for their role.

Another speaker at the "AI for Social Good" panel, Christine Robson, is a product manager for Google's machine learning teams. She has worked on AI in healthcare, such as detecting cancer from large sets of images and recommending ER readmittance based on many factors that doctors can inspect (as opposed to a black-box diagnosis), as well as on predicting earthquake aftershocks and locating whales underwater by tracking their calls. These are examples of AI that seem self-evidently good, and Robson notes that Google has a public set of AI principles that guide this work. Ethicists work on Google's teams, and employees do their best to determine all the ways their technology could go wrong in order to make the best decisions possible. This is a commendable pursuit, but there are always consequences of technology that cannot be foreseen, and some of those consequences are life-changing in both good and bad ways.

The final speaker was Ilya Kirnos, the CTO and founding partner of SignalFire, a venture capital firm with a focus on technology and AI investments. His firm consists of engineers with a deep understanding of what makes technology successful, backed by large endowments and investing in ten-year increments. He explained that there are two types of AI: first, underhyped low- and medium-stakes AI, such as Google ads and search features, which has few consequences if it fails; second, overhyped high-stakes AI, such as self-driving cars and predictions in medicine, which could have very bad consequences if it fails and must first be proven in low-stakes cases. This differentiation is key to determining what is undeniably helpful to people because it saves them time or is something they are willing to pay for, versus what seems exciting but is not yet fully understood. The world simply isn't ready for all of this technology at once. Ilya and his company decided to focus on a certain sweet spot for self-driving vehicles and invest in autonomous forklifts. Forklifts cause about 85 fatalities and 34,900 serious injuries each year, so in the short run jobs are lost in the industry, but in the long run lives are saved and profit is made. Although SignalFire's mission is not necessarily to pursue social good, the firm is focused on what technology can feasibly be integrated into our current economy and what the impacts of that technology are. It focuses on externalities to determine what medium- and low-stakes AI is here to stay.

Humans have unconscious bias towards their own ideas and views of the world, as well as towards people like them. It is only when people are exposed to new ideas that they actually understand how their skills, connections, and influence affect the world. Bias is hard to detect and much more difficult, perhaps impossible, to counteract, because people live in their own bubbles. It's easy to think that the way one lives has no effect on the world, but since people have such vast networks nowadays and technology has such a significant, difficult-to-understand impact, actions matter more than ever. People are also increasingly polarized. Social media and the bubble of the tech industry mean that the people with the greatest influence are surrounded by others like them and are worlds away from the majority of people, namely those negatively affected by the "solutions" they are building (as our Uber driver confirmed). Unconscious bias among the ranks of technologists is a significant factor in why tech seems to create more problems than it solves.

Humans do not always perceive the world rationally, even when they believe they are doing so. People need to be exposed to differing viewpoints to counteract their own confirmation bias, propensity towards the sunk-cost fallacy, and ego. The paper "Why Do Humans Reason? Arguments for an Argumentative Theory" by Hugo Mercier and Dan Sperber describes this detrimental behavior. The authors explain that "reasoning falls quite short of reliably delivering rational beliefs and rational decisions…[it] can lead to poor outcomes, not because humans are bad at it, but because they systematically strive for arguments that justify their beliefs or their actions" (Mercier and Sperber). When people have an explanation for something in their head, they have difficulty accepting information that goes against it. Cognitive dissonance, which occurs when an individual holds two opposing views at once, is uncomfortable, so people often avoid opposing sides of arguments. Everyone has their own view of what philanthropy looks like, and tech companies, including startups and nonprofits, have their own ideas of how their missions and products fit into that view. How can technologists be aware of the drawbacks and unintended consequences of their technology if they only view it in a positive light?

Additionally, internal hiring practices and company culture can bring about social good, both because people want to work for companies that hire ethically and because companies want to hire employees who will really help them accomplish their goals. For example, one of Slack's interpretations of social good is hiring people from diverse backgrounds, so that the hiring process is fair and the company understands its products better thanks to the diversity of voices. An article in The Atlantic titled "How Slack Got Ahead in Diversity" by Jessica Nordell describes Slack's intense focus on genuinely having diversity in the workplace and making those individuals feel accepted. "At Slack, the absence of a single diversity leader seems to signal that diversity and inclusion aren't standalone missions, to be shunted off to a designated specialist, but are rather intertwined with the company's overall strategy" (Nordell). Social good should be a company-wide mission, not an afterthought; by involving employees in diversity conversations, everyone takes responsibility for how the company achieves that mission. In hiring, the company pays attention to "interpersonal phenomena like stereotype threat, in which people from stigmatized groups spend mental energy grappling with negative stereotypes about those groups," as well as interviewers "inadvertently favor[ing] candidates who resemble themselves, and if criteria for a job are ambiguous, interviewers may mentally rejigger those criteria to fit whatever a favored candidate has" (Nordell). Slack recognizes that some people have privilege in certain situations and others don't, and when candidates see that Slack is empathetic towards the issues they face, it is easier for them to feel included in the company. Having a diverse workforce is also a competitive advantage that drives productivity and profits as companies sell their products and services to a broad population. The Institute for PR published a study finding that "Nearly half of American Millennials say a Diverse and Inclusive Workplace is an Important Factor in a Job Search." People clearly care about diversity, but diversity should not just be a numbers game. It should be a genuine, thoughtful effort to bring new ideas into the workplace in order to understand the issues groups face and solve problems better.

When asked for a concrete definition of the social good technology companies should pursue, Robson pushed back: "I hope society isn't hoping that google is going to make the definition of what makes society good. That's not the place of any single corporation. That's the place of society" (Robson). Google spends a lot of time introspectively deciding what its goals are and trying to pursue meaningful projects, but it is bound to mess things up. Everyone has a different definition of social good and of the ways technology will help them and their communities prosper. Companies can only do what they believe is right to best serve their customers and accomplish their missions, but in order to do so, individuals need the freedom to criticize products and ideas. AI for social good can be looked at in terms of a cost-benefit model, but people need to influence that model so society can progress.

Other ideas for how individuals influence high tech: questioning their employers and trying to be part of the process.

The competitive advantage of diversity

Ppl with a variety of experience that understand the world in different ways

Fake it till you make it

50/50 - you don’t know the value that may come from ppl

Change conditions to be like yourself

Giving ppl/ideas a chance

Cultural fit - no equality of opportunity

https://www.nytimes.com/2018/11/15/technology/jobs-facebook-computer-science-students.html

People really care about doing meaningful work

Losing reputation has a big impact - the youth are deciding (need the best minds)

This is a + to stakeholder theory

Diverse, smart people are what will make you succeed

People see that their mission is to show ppl ads

“Career coaches said they had tech employees reaching out to get tips on handling moral quandaries. The questions include ‘How do I avoid a project I disagree with?’ and ‘How do I remind my bosses of the company mission statement?’”

Ppl don’t want to just go with the name, they want to meet the team

“They’re concerned about where democracy is going, that social media polarizes us, and they don’t want to be building it,” Mr. Herst said. “People really have been thinking about the mission of the company and what the companies are trying to achieve a little more.”

“The worst thing that can happen to you is you get a job at Google,” said Michael Seibel, who leads Y Combinator. He called those jobs “$100,000-a-year welfare,” meaning, he said, that workers can get tethered to the paycheck and avoid taking risks.

“The social stigma of working for Facebook began outweighing the financial benefits.”

“Defense companies have had this reputation for a long time,” she said. “Social networks are just getting that.”

Project Maven - Google employees signed a petition against weaponized AI

Early motto was “Don’t be evil”

Image classification on footage collected by drones to fight insurgency

Saw it as particularly risky - deciding whether to make a drone strike

Should be human

Ethical conversation as well as Google’s image

Tech employees have a lot of power and represent other people

Companies are wondering how their product will be used

“engineers and technologists are increasingly asking whether the products they are working on are being used for surveillance in places like China or for military projects in the United States or elsewhere.”

“That’s a change from the past, when Silicon Valley workers typically developed products with little questioning about the social costs.”

What has changed and why?

Need to understand thinking even if you don’t agree with decision - need transparency

Need to know that there’s no corruption and that people are really being listened to.

Clarifai made a secret room and wouldn’t tell their employees what they were doing

It was for Project Maven

One engineer quit the project immediately after a meeting with the Defense Department where killing was discussed in frank terms, they said.

“You can think you’re building technology for one purpose, and then you find out it’s really twisted,” Ms. Nolan said

Even if you think you know, they could be lying to you

It’s up to workers in companies to fight back and really hold their bosses accountable and make sure their voices are heard

Parts of Google’s systems (like Colossus) contribute to everything Google does, so even if involvement is not direct, it can still harm

“When publishing new work, researchers rarely discuss the negative effects. This is partly because they want to put their work in a positive light — and partly because they are more concerned with building the technology than with using it.”

The Future of Computing Academy consists of 46 researchers

Lip-reading technology introduced by Google Brain and DeepMind could be used to help people with speech impediments communicate better or, what seems to me more likely, allow for better surveillance

Actionable items:

“calling on peer-reviewed journals to reject papers that do not explore those downsides”

We’re all in this together to recognize what tech we actually want to have in the world

Open research may not be the best idea

The M.I.T. Media Lab “recently built a system called Deep Angel, which can remove people and objects from photos.”

This all makes it much easier to have fake news, control people, and watch people

This is terrifying
