
TikTok doesn't read your mind, it makes your mind

December 17, 2021 | Hi-network.com

TikTok doesn't read minds, but The New York Times would like you to believe it does.

In a December 5th article in The Times, columnist Ben Smith, writing for the paper's The Media Equation section, describes a leaked document the Times obtained from an unnamed source inside the company that reveals algorithms supposedly used to drive engagement on TikTok.

While there have been discussions of TikTok's algorithms for deciding what content is seen, Smith writes that the leaked document "offers a new level of detail about how the algorithm works."

The article has several omissions. One is the lack of an explanation of how and why the algorithm leads to particular content consumption. While the algorithm appears to assign scores to videos based on metrics such as user "likes," comments, and times played, there is no discussion of what objective the algorithm is seeking to optimize, such as total engagement (hours spent), for example, or the total spread of video content, its "viral" quality.

More tellingly, The Times article employs misleading language common in media discussions of artificial intelligence and other algorithmic techniques, ascribing things such as "mind" and desire to what are merely engineering feedback loops.

Also: App Annie predicts TikTok to reach 1.5 billion active users in 2022

The Times's headline, "How TikTok Reads Your Mind," is followed by references to how the algorithm is detecting people's intent:

The document offers a new level of detail about the dominant video app, providing a revealing glimpse both of the app's mathematical core and insight into the company's understanding of human nature - our tendencies toward boredom, our sensitivity to cultural cues - that help explain why it's so hard to put down. 

However, nothing about human nature is revealed in the discussion of the algorithm in question. The algorithm, based on the document obtained by Smith, appears to be a very simple calculation of factors as follows:

P_like × V_like + P_comment × V_comment + E_playtime × V_playtime + P_play × V_play

The terms "like," "comment," "playtime," and "play" (whether the video was played at all) are presumably references to the various metrics assigned to videos. Smith doesn't explain the "P," "V," or "E," although the article implies that P stands for a prediction "driven by machine learning," without elaborating.
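
If the formula is taken at face value, the score is nothing more than a weighted sum. Below is a minimal sketch in Python, assuming, as the article implies, that the P and E terms are model-predicted engagement values for a given user-video pair and the V terms are fixed importance weights; all names and weight values here are hypothetical illustrations, not TikTok's.

```python
# Minimal sketch of the leaked weighted-sum score. "P"/"E" are assumed to be
# model-predicted engagement metrics and "V" fixed importance weights.
# Every name and number below is a hypothetical illustration, not TikTok's.

def score_video(p_like: float, p_comment: float,
                e_playtime: float, p_play: float,
                v_like: float = 1.0, v_comment: float = 2.0,
                v_playtime: float = 0.5, v_play: float = 0.25) -> float:
    """Return a single relevance score for one user-video pair."""
    return (p_like * v_like
            + p_comment * v_comment
            + e_playtime * v_playtime
            + p_play * v_play)

# Rank a handful of candidate videos by score, highest first.
candidates = {
    "video_a": score_video(p_like=0.10, p_comment=0.02, e_playtime=12.0, p_play=0.90),
    "video_b": score_video(p_like=0.05, p_comment=0.01, e_playtime=30.0, p_play=0.95),
}
print(sorted(candidates, key=candidates.get, reverse=True))
```

Nothing in such a sum requires knowing anything about the user beyond past measurements of the same kinds of engagement.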

Also: Asynchronous videos: Can the TikTok generation save us from meeting overload?

As such, the algorithm is summing predicted metrics of content, regardless of human mentality.

The AI firm DeepLearning.AI, founded by researcher Andrew Ng, on Wednesday discussed Smith's article in the company's newsletter, The Batch. The newsletter suggests that "V" may stand for "value," meaning a weight applied to each of the metrics in terms of its importance in some final score.

Flow-chart of a supposed recommendations algorithm used by TikTok, as reprinted by The New York Times from an internal document leaked to the Times by a TikTok staffer. (Image: The New York Times)

Despite the omissions, it's clear that the system is not predicting mentality; it is presumably mapping pieces of content to predicted outcomes in terms of likely views and/or engagement.

To presume that there is a mind in the user who expresses preferences by clicking, as does The Times's Smith, is conjecture that may not be supported by the facts.

In statistical terms, for a machine to read a user's mind would imply the notion of a "prior": something that exists before a measurement. However, what is revealed as a mind, in the form of expressed preference, is the opposite statistical notion, a "posterior," something that exists only after measurement. 
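
To make that distinction concrete, here is a minimal Beta-Bernoulli sketch, entirely my own illustration rather than anything from the leaked document: the "preference" such a system works with is only a posterior estimate, and it comes into existence only after clicks have been observed.

```python
# Minimal Beta-Bernoulli sketch of the prior/posterior distinction.
# The "preference" exists only as a posterior, updated after observed clicks.
# The uniform Beta(1, 1) prior and the counts are illustrative assumptions.

def posterior_click_rate(clicks: int, impressions: int,
                         prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of a user's click rate under a Beta(prior_a, prior_b) prior."""
    return (prior_a + clicks) / (prior_a + prior_b + impressions)

print(posterior_click_rate(clicks=0, impressions=0))   # 0.5 -- the prior alone; no "mind" has been measured
print(posterior_click_rate(clicks=7, impressions=10))  # ~0.67 -- an inference made only after behavior
```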

It seems more likely that the mind is something inferred after the fact, if it has any meaning at all. Consider the system that constitutes TikTok. Users can upload and view various short-form videos. As users submit videos and consume videos, they are presented with more such videos. In a sea of videos, an individual is clicking or not clicking, engaging or not engaging. 

Also: TikTok Boom, book review: The rise and rise of YouTube's younger, hipper competitor

A user's mentality, or emotions, is in a sense irrelevant because the system is not asking for volunteered ideas. Rather, the user is being asked to respond to a finite set of choices, and the system presumably gets better and better at repeatedly stimulating that activity, leading to higher and higher numbers of daily active users, which, according to The Times, are now on the order of one billion and forecast to rise to 1.5 billion in 2022.

All that suggests that, at best, TikTok is a highly effective behavior machine, a machine for shaping behavior on TikTok, rather than a mind-reading device.
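
The shaping effect of such a loop can be seen in a toy simulation, purely illustrative and in no way TikTok's actual code: a recommender that simply over-serves whatever was clicked will concentrate a feed around a few topics even when the clicks carry no underlying preference signal at all.

```python
# Toy rich-get-richer feedback loop: whatever gets clicked gets shown more.
# Purely illustrative; the topics, click rate, and loop are invented here.
import random
from collections import Counter

random.seed(0)
weights = Counter({topic: 1.0 for topic in ["dance", "cats", "news", "cooking"]})

for _ in range(1000):
    topics, w = zip(*weights.items())
    shown = random.choices(topics, weights=w)[0]  # sample one topic to show
    if random.random() < 0.5:                     # user clicks half the time, with no preference
        weights[shown] += 1.0                     # clicked topics get shown more often

print(weights.most_common())  # exposure skews toward a few topics from a uniform start
```

The skew emerges from the loop itself, not from anything discovered about the user.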

Taking the analysis a step further, studies of TikTok in academic literature suggest a very mixed view of the algorithm at work. 

In some cases, the company's algorithm operates not merely to propagate things that might be popular but also to give exposure to things that might not be as popular. 

For example, two Carnegie Mellon researchers, Daniel Le Compte and Daniel Klug, this year interviewed social activists who use TikTok to present videos to bring attention to social causes. The researchers reported that the activists expressed a preference for TikTok over other social media because their videos were more widely viewed there than on other platforms:

Some participants noted that the usage of TikTok helps get their message out beyond their own "circle": "So I was able to focus what my following was for rather than, um, Facebook, where it's like just friends of friends or family people you meet in real life." A main limiting factor of other platforms, that participants noted, was the necessity for audience members to connect or follow a creator before they would be able to see the content, unless in the unlikely event that the content was "promoted" through ads, or went viral.

While TikTok may circulate things beyond just what a person expresses a preference for, it also appears to be true that TikTok activity is clustered around things that groups of people approve in large numbers regardless of what an individual may feel or think about them. 

A 2019 study by researchers at the Guilin University of Electronic Technology in China and the University of Oslo, Norway, looked at numbers of views and likes on TikTok videos. 

The authors concluded that most of what gets played is what has been "liked" by users:

In particular, the number of views and the number of likes have a very high correlation coefficient which is 0.91, meaning that a video which is popular in terms of number of views is very likely to be popular in terms of number of likes and vice versa.
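
The figure the authors report is an ordinary Pearson correlation between view counts and like counts, which anyone can compute on such data; the counts below are invented purely for illustration.

```python
# Pearson correlation between views and likes, as in the 2019 study.
# The counts here are made up for illustration only. Requires Python 3.10+.
from statistics import correlation

views = [1_200, 45_000, 300, 980_000, 12_500]
likes = [150, 5_100, 20, 110_000, 1_400]
print(round(correlation(views, likes), 2))  # close to 1.0 when likes track views
```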

Again, that users are continually shown more of what they have already been shown and have clicked on is a matter of an engineering feedback loop, not an instance of mind reading.

And a third study, this year, by researchers at Boston University, Binghamton University, and University College London, makes one wonder whether the recommendation "engine" of TikTok is doing anything at all.

The study examined 400 TikTok videos with the aim of "understanding indicators [that] make a short video go viral."

The authors labeled the videos for ten different factors that might affect virulence, or, as they term it, "virality," the propensity of a video to be "liked" by users. Those factors included whether the creator of the video was "popular," meaning had a large number of followers; the style of the video, such as using the "duets" feature in TikTok to re-mix someone else's dance track; and emotional content, among others.

The authors also sought to measure the role of the recommendations algorithm alongside those other factors. They did so by noting how many videos used the relevant hashtags for promotion, and how long a video had been in the system, given that viral videos tend to go viral soon after being uploaded. 

The authors then used all these factors in a variety of very simple machine learning and statistical models that can classify things, including Random Forest, Support Vector Machines, Logistic Regression, Gaussian naive Bayes, and Decision Trees.
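
That modeling step is standard fare in, for example, scikit-learn. The sketch below rests on my own assumptions about feature names and uses randomly generated labels in place of the study's hand-labeled data, so it shows the shape of the procedure rather than reproducing the paper's results.

```python
# Sketch of fitting simple classifiers to labeled video features and scoring
# them by AUC, in the spirit of the study. Features, data, and labels are
# hypothetical stand-ins, not the authors' dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400  # the study labeled 400 videos
X = np.column_stack([
    rng.lognormal(8, 2, n),   # creator follower count (assumed feature)
    rng.integers(0, 2, n),    # close-up shot or not (assumed feature)
    rng.integers(0, 10, n),   # promotional hashtags used (assumed feature)
    rng.uniform(0, 240, n),   # hours since upload (assumed feature)
])
y = rng.integers(0, 2, n)     # viral / non-viral label (random here; hand-labeled in the study)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
for model in (RandomForestClassifier(random_state=0), LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(type(model).__name__, round(auc, 2))
```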

The result? Their classifiers, to varying degrees, were able to "identify the most important features that discriminate between viral and non-viral videos." The top factor, they found, was the popularity of the creator. The second biggest factor was whether the video had close-ups or not, a finding that "matches previous studies on image memes suggesting that highly viral memes are more likely to use a close-up or medium-shot scale."

Hence, popularity reinforces popularity, and people respond to close-ups. None of that is mind reading. Meanwhile, the recommendation system, they found, had the lowest value as a predictor of virality. 

"The features in RH2 (Recommendation System) have the lowest AUC [area under the curve] among the three RHs [research hypotheses], as low as 0.71," they write. "In fact, the accuracy obtained on these features is also quite low (0.56), suggesting that they are not a good predictor of a video's virality." 

The authors also note, anecdotally, the popularity of cat videos. 

Hence, studies suggest that TikTok may impose some videos upon its users regardless of their desire or mentality, but that a lot of TikTok activity is a fairly obvious popularity contest driven by herd mentality. None of that equates to mind reading.

On the contrary, the research suggests TikTok may shape mental attitudes by reinforcing dominant trends in group behavior, such as responding to popular "creators" who already dominate media consumption. 

TikTok, in other words, plays a greater role in creating mentalities than reading minds. 

Rather than speculate on mind-reading, it's worth bearing in mind certain fundamental aspects of social media, including TikTok, aspects that have nothing to do with minds or mentalities.  

First, activities on social media could likely be taken over by machines. Viewing videos and "liking" them are activities that are well within the scope of software automation. Hence, the notion that something needs to have a mind in order to participate is irrelevant.

Second, social media is a machine designed to arrive at a clear signal within the noise. Individual preference or interest or mentality is irrelevant to the machine's goal, namely, to sort behavior into clear categories. 

And lastly, no single individual has an identity or a mentality on social media. What is referred to as one's persona, one's mind, one's identity, are merely illusions, the consequence of a name being attached to activities that are stored in a database. 

People don't exist on social media even though they spend time - lots of time - using it. Hence, no person, no mind.
