
Multimodal Analysis 1

Over the years, technology has become increasingly advanced. There is so much one can do on the internet nowadays: make purchases, view content, search the web, and more. However, as technology and the internet have evolved, surveillance and data collection have undoubtedly increased as well.


When we go on the internet, everything we do produces data that can be tracked and collected. For example, even a “simple web search from even the most unsophisticated of smart phones generates a lengthy record of new data” (Cheney-Lippold, 2019). Those who benefit most from this data are large, well-known companies and the government, which can use it for things like targeted marketing toward certain groups or, in the government’s case, stopping potential crimes.


In fact, many big companies, such as Google, Microsoft, and Yahoo, went on to buy digital marketing companies. Doing so allowed them to collect even more data and information about their users.


But what exactly happens with the data that is collected? Using algorithms, companies can observe, analyze, and process it to profile us and assign us an identity. These companies commonly use algorithms to infer what people are like from their data and online behavior; by learning more about their users, they can personalize each user’s experience on their platforms.


These companies are not really trying to uncover personal information or demographics; that matters less to them. Instead, they want to figure out who people are based on their searches, their online purchases, the websites they visit, and more.


In We Are Data, Cheney-Lippold explains that to these algorithms, a “user is a ‘man’ according to how closely ‘his’ data stacks up to preexisting models of ‘man’. These models are what [he] call[s] measurable types, or interpretations of data that stand in as digital containers of categorical meaning” (Cheney-Lippold, 2019). To elaborate, a user’s online behavior (searches, visited websites, and everything else they might do on the internet) is collected and processed, and how closely that data measures up to a measurable type determines who that user is seen as on the internet.
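To make the idea concrete, here is a minimal Python sketch of how a measurable-type assignment might work. This is an illustration under invented assumptions, not Google’s actual method: the site names, visit weights, and prototype profiles are all hypothetical, and real systems use far richer statistical models.

```python
# A toy sketch of the "measurable type" idea: a user is assigned
# whichever category their behavior vector most resembles.
# All sites, weights, and prototypes here are invented for illustration.
from math import sqrt

# Hypothetical "measurable types": prototype profiles built from
# aggregate behavioral data (how often each category visits each site).
MEASURABLE_TYPES = {
    "man":   {"espn.com": 0.9, "techcrunch.com": 0.7, "imdb.com": 0.4},
    "woman": {"pinterest.com": 0.9, "vogue.com": 0.7, "imdb.com": 0.4},
}

def cosine(a: dict, b: dict) -> float:
    """Cosine similarity between two sparse visit-frequency vectors."""
    dot = sum(a[k] * b.get(k, 0.0) for k in a)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def assign_type(visits: dict) -> str:
    """Label the user with whichever measurable type their data best matches."""
    return max(MEASURABLE_TYPES, key=lambda t: cosine(visits, MEASURABLE_TYPES[t]))

# One user's browsing history, reduced to visit frequencies.
user = {"techcrunch.com": 0.8, "espn.com": 0.5, "vogue.com": 0.2}
print(assign_type(user))  # -> "man", whatever the user's actual gender
```

In this toy model, the user is labeled a ‘man’ simply because their browsing vector sits closer to the ‘man’ prototype, which mirrors the kind of inference described in the example that follows.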

However, an individual’s measurable type might not actually represent who they are in real life. For instance, in We Are Data, Cheney-Lippold describes how Google assigned his friend, a female scientist, “a ‘gender’ by treating her web-surfing data as inputs. Then, Google subsequently spoke for her data according to how well it statistically stacked up to its quantitative models” (Cheney-Lippold, 2019). Her searches and online behavior had been read as masculine, and Google’s algorithms profiled her as such. Her future online experiences, such as the advertisements and video recommendations she receives, would be tailored to this online identity that Google applied to her.


Algorithms certainly affect how people experience the internet: what one person sees online can be completely different from what another sees. There are genuine benefits and conveniences to algorithms and to companies collecting data. Users can discover more content that is relevant to them, and their whole online experience is tailored to what they do online.


Although the algorithms that companies use do show us offerings they think we will like and want to see, it is worrisome to consider the ways they can use this influence over us. Their “economic, political, and cultural agendas behind their suggestions are hard to unravel. As middlemen, they specialize in shifting alliances, sometimes advancing the interests of customers, sometimes suppliers: all to orchestrate an online world that maximizes their own profits” (Pasquale, 2016).


One example of this is “Facebook's news algorithm, which enhances the so-called ‘filter bubble’ of information. The social media giant's news algorithm tailors updates and content to each user, which could keep users from receiving information or news from sources that challenge their worldview” (Johnson, 2017). The content and news that people take in on Facebook, as well as on other social media platforms, is so personalized to their own opinions and lives that they are very unlikely to see opposing views. This is especially controversial when it comes to political or social issues, as it makes the platform look biased, and it can even verge on censorship. For many, it seems like “Facebook is becoming an echo chamber that prevents us from being confronted with opinions we don't agree with” (Nejrotti, 2016).
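As a rough illustration of how such a bubble can emerge, here is a toy Python sketch of an engagement-driven feed ranker. The stories, stance scores, and scoring rule are invented for illustration and are not Facebook’s actual algorithm; the point is only that ranking by predicted engagement naturally pushes disagreeable sources out of view.

```python
# Toy illustration of a filter bubble: stories similar to what the user
# already engages with float to the top; challenging viewpoints sink.
# All data and the scoring rule are invented for illustration.

# Each story carries a stance score on some issue, from -1.0 to +1.0.
stories = [
    ("Op-ed strongly supporting policy X", +0.9),
    ("Balanced report on policy X", 0.0),
    ("Op-ed strongly opposing policy X", -0.9),
]

# The user's inferred stance, learned from their past likes and clicks.
user_stance = +0.8

def predicted_engagement(stance: float) -> float:
    """The closer a story is to the user's own stance, the higher the score."""
    return 1.0 - abs(stance - user_stance) / 2.0

# Rank the feed by predicted engagement, as an engagement-maximizing
# system would; the opposing op-ed lands last (or is cut off entirely).
ranked = sorted(stories, key=lambda s: predicted_engagement(s[1]), reverse=True)
for title, stance in ranked:
    print(f"{predicted_engagement(stance):.2f}  {title}")
```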


Here is a video, featuring Eli Pariser, that gives a more in-depth view of news algorithms.

[Embedded video: How news feed algorithms supercharge confirmation bias | Eli Pariser (Big Think, 2018)]

It is entirely possible for companies other than Facebook to withhold certain information or news from people, or to show it excessively, through algorithms. Those in control of these algorithms have the opportunity to manipulate what people see, possibly with an agenda behind it.


The use of algorithms can also introduce biases, as stated previously. One example of algorithmic bias came from Amazon, which tried creating a machine learning tool to help with the hiring process and narrow down the large number of resumes it received. The tool was meant to look at patterns in the resumes of previous successful hires and search for those patterns in incoming resumes. However, “Amazon’s computer models were trained to vet applicants by observing patterns in resumes submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry” (Dastin, 2018). The tool was taught to recognize a vast number of phrases and words, and it penalized resumes that included the word “women’s” while favoring candidates who used descriptive language commonly found in men’s resumes. As a result, it produced a large disparity against female candidates. It is always possible for an algorithm to create a discrepancy or bias in the information it sorts. Cheney-Lippold states that an “algorithm might disadvantage some data while privileging others, through either technological failure and/or an innate bias of the algorithm’s authors” (Cheney-Lippold, 2019).
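As a rough sketch of how this kind of bias can arise, consider the following toy Python model. The resumes, words, and log-odds scoring are invented (this is not Amazon’s actual system); the point is that when the historical training data is dominated by male hires, a word like “women’s” picks up a negative weight even though gender is never an explicit feature.

```python
# Toy sketch of learned hiring bias: if historical training data is
# mostly successful male hires, words correlated with female applicants
# end up with negative weights. All data here is invented.
from collections import Counter
from math import log

# Hypothetical labeled history: (words in resume, was_hired).
history = [
    ({"software", "chess", "club"}, True),
    ({"software", "rowing"}, True),
    ({"software", "chess", "captain"}, True),
    ({"software", "women's", "chess", "club"}, False),
    ({"software", "women's", "rowing"}, False),
]

hired, rejected = Counter(), Counter()
for words, was_hired in history:
    (hired if was_hired else rejected).update(words)

def weight(word: str) -> float:
    """Smoothed log-odds of a word appearing in hired vs. rejected resumes."""
    return log((hired[word] + 1) / (rejected[word] + 1))

def score(resume: set) -> float:
    """Sum of learned word weights: the model's 'fit' for a candidate."""
    return sum(weight(w) for w in resume)

# Two otherwise-similar resumes; one mentions a women's chess club.
print(score({"software", "chess", "captain"}))             # higher score
print(score({"software", "women's", "chess", "captain"}))  # penalized
```

Nothing in this sketch encodes gender directly; the penalty on “women’s” falls out of the skewed training history, which is exactly the failure mode Dastin describes.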


There are also many other examples where algorithms and machine learning did not meet their full potential or do what they were supposed to do. Below is a TED Talk in which Joy Buolamwini describes her experience with a facial recognition tool whose algorithms failed to recognize her features.

[Embedded video: How I’m fighting bias in algorithms | Joy Buolamwini (TED, 2017)]

Algorithms are a complex and innovative idea that comes with positives and negatives. Companies can collect data and use algorithms to tailor an individual’s online experience, which has upsides and downsides of its own, and algorithmic biases can put some people at a disadvantage. Algorithms, especially the ones used by the companies we interact with most, can truly shape how we see the world. A quote from The Black Box Society states that “Facebook defines who we are, Amazon defines what we want, and Google defines what we think” (Pasquale, 2016). It demonstrates just how powerful these algorithms are and how much they affect our daily lives.

Below is another TED Talk, featuring Robin Hauser, who also touches on biases in algorithms.

[Embedded video: Can we protect AI from our biases? | Robin Hauser (TED Institute, 2018)]

Works Cited

[Big Think]. (2018). How news feed algorithms supercharge confirmation bias | Eli Pariser [Video file]. Retrieved from https://youtu.be/prx9bxzns3g 


Cheney-Lippold, J. (2019). We are data: Algorithms and the making of our digital selves. New York, NY: New York University Press.

Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Retrieved from https://www.reuters.com/article/us-amazon-com-jobs-automation-insight/amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK08G

Johnson, C. (2017). How algorithms affect our way of life. Retrieved from https://www.deseret.com/2017/3/10/20607853/how-algorithms-affect-our-way-of-life#chances-are-youve-heard-of-algorithms-over-the-years-but-experts-say-everyone-needs-to-become-aware-of-what-they-are-and-how-they-stand-to-change-life-on-earth-in-the-future

Nejrotti, F. (2016). Facebook's Filter Bubble Is Getting Worse. Retrieved from https://www.vice.com/en_us/article/vv73qj/facebooks-filter-bubble

Pasquale, F. (2016). The black box society: the secret algorithms that control money and information. Cambridge, MA: Harvard University Press.

[TED]. (2017). How I'm fighting bias in algorithms | Joy Buolamwini [Video file]. Retrieved from https://youtu.be/UG_X_7g63rY


[TED Institute]. (2018). Can we protect AI from our biases? | Robin Hauser | TED Institute [Video file]. Retrieved from https://youtu.be/eV_tx4ngVT0
