   Content Moderation Mechanisms in Video Platforms: A Case Study of YouTube
   
Author: Hassani, Hossein
Source: Cyberspace and Social Media Studies, 1403 SH, Vol. 1, No. 1, pp. 47-84
Abstract    This article examines how harmful content is moderated on YouTube, a user-driven video platform. Content governance and regulation on platforms has become a challenging issue, and the challenge is greater still for platforms built around video. Live-streamed video and deceptive video formats such as deepfakes, which are shared on video-based platforms, make it harder to counter and prevent the spread of harmful, illegal, and unethical video content. Using a case-study method and qualitative content analysis, this research examines YouTube's policies, its community guidelines, and its practical content-moderation practices to determine how the platform moderates user-generated content. Overall, the findings point to the complexity of monitoring video content, the growing use of artificial intelligence for moderating it, and the risks that follow. Contrary to the perception of the public and of some policymakers in Iran, global platforms have built sophisticated, continually updated mechanisms to protect their user communities, creating a safe space for user interaction while sustaining their profitability and reputation. Given that domestic platforms in Iran are still under development, the content-moderation mechanism of the global YouTube platform can serve as a model for Iranian platforms.
Keywords: video platform, content moderation, platform governance, YouTube
Address: Research Institute for Culture, Art and Communication, Iran
Email: hassani@ricac.ac.ir
 
   Content Moderation Mechanisms in Video Platforms: A Case Study of YouTube
   
Authors: Hassani, Hossein
Abstract

Introduction: Today, a significant portion of platformized content on the internet (Flew, 2021) is produced in video format. The monitoring and regulation of content across various user-generated content-sharing platforms has introduced a new landscape for media content governance. One of the darker aspects of this shift is the facilitation of producing and disseminating various forms of inappropriate content, ranging from misleading information to harmful and disruptive videos. YouTube has become one of the most popular content-sharing platforms globally, attracting a diverse, international user base. The rapid advancement of new technologies, such as artificial intelligence, in the distribution of disruptive video content has made filtering and blocking these types of content more challenging than ever before. Accordingly, the primary focus of this paper is to examine the mechanisms employed by YouTube, a leading global video platform, in moderating and filtering content.

Methods: This research adopts a qualitative approach, specifically a case-study design. A case study is a research method that involves an in-depth and detailed examination of a specific subject, such as an individual, group, event, organization, or phenomenon (Crave et al., 2011). Case studies employ various data-collection methods, such as interviews, observations, document analysis, and archival records, to gather comprehensive information on the topic of interest. The goal of a case study is to provide a contextualized, nuanced understanding of the subject, examine relationships among variables, identify patterns, and generate insights that may contribute to theoretical development or practical solutions.

This study, conducted during 2023-2024, aims to offer a detailed and holistic analysis of content-moderation processes on YouTube, with a specific focus on the platform's handling of video content and the application of algorithmic moderation facilitated by artificial intelligence. The study includes an analysis of YouTube's documents and policies. Data collection relied primarily on YouTube's own documents and materials, such as the Community Guidelines, various policies, published reports, and external studies and reports analyzing YouTube's content moderation. Data analysis was performed through qualitative content analysis: main categories were extracted after reviewing the various documents and then substantiated with extracted evidence.

Conclusion: Content moderation has become an essential component of digital platforms, protecting users from harmful content while ensuring an inclusive and safe online environment. The challenge of managing vast volumes of user-generated content has led to the development of advanced moderation mechanisms that combine artificial intelligence with human moderation. AI-based moderation systems have demonstrated exceptional efficiency in identifying and flagging problematic content at scale. However, they are not without limitations: they often struggle with contextual understanding and language nuances, which can lead to the incorrect flagging and removal of lawful content.

Content moderation has both theoretical and practical dimensions. First, platforms must develop a comprehensive set of documents, such as terms of service, community guidelines, privacy and safety policies, and misinformation policies, based on overarching documents, legal requirements, and platform-specific approaches. Additionally, certain operational procedures should be established within the platform to distinguish harmful from non-harmful content clearly and unambiguously.

Human moderators play a critical role in addressing these limitations by bringing judgment and discernment to the moderation process. They are better equipped to understand the complexities of language, culture, and context, ensuring a more accurate content assessment.

Content moderation is a complex mechanism that spans guidelines and practical content-filtering procedures. As new forms of disruptive content are increasingly created and shared, these guidelines must be regularly updated and remain as clear and precise as possible, allowing both human and machine moderators to operate with ease and clarity.

To achieve this goal, Iranian platforms should invest in advanced AI technology while fostering a supportive and empathetic environment for human moderators. They should also enhance transparency, accountability, and collaboration with users, industry stakeholders, and regulatory authorities to ensure that content-moderation practices align with social values and legal requirements. This approach will ultimately create a safe and inclusive online environment for Iranian users while mitigating the risks associated with user-generated content, particularly video content.
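The hybrid workflow described in the conclusion, where an AI system flags content at scale and ambiguous cases are escalated to human moderators, can be illustrated with a minimal triage sketch. All names, fields, and thresholds below are illustrative assumptions for exposition; this is not YouTube's actual system or any real moderation API.

```python
# A minimal sketch of hybrid (AI + human) content-moderation triage.
# The harm score is assumed to come from some upstream classifier;
# the thresholds 0.9 and 0.5 are arbitrary illustrative values.
from dataclasses import dataclass


@dataclass
class Video:
    video_id: str
    ai_harm_score: float  # 0.0 (benign) .. 1.0 (clearly harmful)


def triage(video: Video, remove_above: float = 0.9,
           review_above: float = 0.5) -> str:
    """Route a video: auto-remove clear violations, queue borderline
    cases for human review, and publish the rest."""
    if video.ai_harm_score >= remove_above:
        return "auto_remove"    # high-confidence violation: machine acts alone
    if video.ai_harm_score >= review_above:
        return "human_review"   # ambiguous context or language: needs a person
    return "publish"            # no signal of harm


queue = [Video("a", 0.95), Video("b", 0.6), Video("c", 0.1)]
decisions = {v.video_id: triage(v) for v in queue}
# decisions == {"a": "auto_remove", "b": "human_review", "c": "publish"}
```

The design choice the paper highlights is exactly the middle branch: rather than letting the classifier act on every item, low-confidence cases are routed to humans, trading throughput for the contextual judgment that automated systems lack.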
Keywords: platform governance, content moderation, video platform, YouTube
 
 

Copyright 2023
Islamic World Science Citation Center
All Rights Reserved