WHAT THE TECH

How do we find meaning among the machines?

Hey there, I'm a computer science undergrad at Berkeley. Thinking about my opportunities for using my CS skills in the future, I find myself asking a lot of questions. How do I do work that is actually meaningful and helpful to people? And, how can technology bridge barriers between people and scale bright ideas?
This futuristic world we live in can be difficult to understand, but it is important to ask these key questions and focus on impact. This blog is called What the Tech because, frankly, What the Tech is Tech... and Life... and Everything... I'm not sure. However, in these blog posts you'll find my attempts to be a heckler (or techler haha) by questioning, challenging, and trying to understand what the tech is happening with today's biggest ideas.
Let's see where this takes us! :P

PROJECTS

PROJECT I

To Beep or Not to Beep: Why Understanding Human Consciousness Means Better Robots

Currently, the logical, information-processing side of the human mind is the part we understand best and use to build helpful computers, but complexities at the subconscious level still prevent technology from becoming “human.” However, artificial intelligence has come a long way toward replicating creativity, analysis, and intelligence, and it even offers humans an opportunity to improve their lives by changing or uploading their brains. With all these technological advances, what will it take to reach a future where both robots and people have consciousness? And, if this happens, how can these two groups best function together to maximize prosperity?

PROJECT II

Slidedeck on Technology and Philanthropy

A presentation of research related to corporate philanthropy, psychological ideas such as argumentative theory, and why advancements in technology have great potential to damage society. Project III is a much more developed version of this project.

PROJECT III

The Social Good Revolution: How Corporate Responsibility Can Enable Technological Innovation and Beneficially Impact Society

Abstract: In this day and age, technology is affecting people in ways it never has before. Artificial intelligence is replacing human decision making in key areas; the sensational ways in which companies use technology bring short-term gains while corrupting entire populations; and unmoderated corners of the internet diminish participant responsibility and allow hateful groups to reach others under the guise of anonymity. These advances pose new and concerning ethical and moral questions we’ve never faced before. Building technology with the benefit of society in mind may change from being the “right” thing to being the only way technologists, companies, and the people of the world can prevent self-destruction. This social good revolution is on the horizon: companies like Uber and Lyft are becoming more competitive in the realm of total societal impact, companies like Pinterest and LinkedIn are realizing where their algorithms fall short of serving the needs of their customers, and others like Google are hiring teams of ethicists and setting goals for their impact on the world. When technology companies and their engineers are aware of the unintended consequences of their new technology, they can build better products that make everyone better off and sustain the company in the long term. Mission-driven development is taking off because the future of the world is increasingly at stake. However, making an impact requires more than just intention. Argumentative theory holds that individuals must interact and compare ideas in order to dismantle their confirmation bias. People are starting to care more about working for companies that make ethical decisions. They can contribute by questioning corporate intentions, expressing their opinions, and feeling confident in the social impact of the products they build.
Companies can also encourage this kind of culture among their ranks by aiming for diversity of thought in hiring and being open about their decision making. These efforts give engineers an incentive to work for such companies and help the technology they build better serve the mission.
Keywords: Technology, Corporate Philanthropy, Artificial Intelligence, Ethics of Technology, Mission-Driven Development, Human Decision Making, Argumentative Theory, Confirmation Bias, Free Speech, Total Societal Impact, Corporate Social Responsibility, Pinterest, LinkedIn, Google, Slack, Uber, Lyft, Algorithmic Bias, Diversity and Inclusion, Hiring Practices

Techler

Self Assessment II

The discussion forums were very helpful because all the clusters I chose were very interesting to me. They were also somewhat interrelated, so I can see some of the ideas from those discussions going into my final project word for word. Even the clusters that weren't very related, such as the ones about memes, race, and culture, I still found a way to incorporate somehow.

The library exercises were less helpful since I don't think I had the right search terms and ended up not finding helpful sources. I was mostly on the hunt for facts and studies, but this wasn't very inspiring and I dug myself into a hole a little bit.

The research slide deck was really helpful for organizing my ideas and seeing how other people reacted to them.

The expanded prospectus and annotated bibliography weren't super helpful, since I already had an idea of what I wanted to research next, and I already had a lot of sources annotated in bullet-point form.


I think I'm getting stronger as a writer because it's a lot easier to sit down and start writing about what is significant about what I'm reading. The writing muscle is definitely something that can be trained, and I can realize when I'm confusing myself and need a breath of fresh air by approaching the topic from a new angle.

Once I come up with a basic argument for an idea, I feel like I kind of give up on it or get bored with it and don't put in as much effort to question what is wrong with my thinking. Also, I find myself less willing to do more research once the bulk of it is done, because I don't want to do more work and follow additional leads. I'm falling into the sunk cost fallacy (which I'm researching, haha) a little bit: I invest myself in a certain train of thought and don't want to undermine that work, or create more work for myself, by investigating counterarguments.


Arc 1

Humans are complex creatures. How can we act with integrity and morals to do good in the world? What underlying philosophies will help? What innate psychological characteristics hold us back?

Ok... given that we know the 3 commandments, what do these look like in practice? How do we orient our lives towards social good?

Diversity helps us see different opinions. How do different opinions help educate us? (This still doesn't make complete sense.) Who takes responsibility for building tech for social good: companies, engineers, or consumers? How do we hold these people responsible?

What does it mean to allow oneself to be regulated by governments, industry standards, or consumers? How do we balance our own internal corruption with the corruption of other people?
