
Digital Rights Watch defends social media platforms' efforts to remove terrorist content

16 November 2021 | Hi-network.com

Digital Rights Watch has defended social media platforms and their efforts to remove abhorrent violent material online, telling an Australian parliamentary committee that companies should not be expected to be aware of all of this content at all times.

"I'm not sure what we gain by doing the sort of pressure on companies to wholesale remove all this content all the time," Digital Rights Watch programme and partnership director Lucie Krahulcova said, appearing before Australia's Parliamentary Joint Committee on Law Enforcement.

"I don't think it removes the societal issues, it doesn't remove the socioeconomic issues, and it certainly doesn't remove the violence that happens around the world."

The comments were made in relation to Australia's Criminal Code Amendment (Sharing of Abhorrent Violent Material) Act 2019 (AVM Act), which is currently being reviewed by the joint committee.

The AVM Act requires content service providers to remove abhorrent violent material and to notify police when their services are used to share it, or risk being fined up to 10% of their annual global turnover. It also gives the eSafety Commissioner the power to issue notices to content service providers ordering them to remove specific abhorrent violent material.

Krahulcova also told the committee that there is a difference between moving quickly to remove live-streamed terrorist content and removing violent content about an ongoing crisis.

She warned that imposing excessive penalties on companies for hosting violent material on their social media platforms risks decades' worth of footage from activists and journalists being taken down to avoid regulatory backlash. In explaining this concern, she pointed to YouTube's automated tools removing videos covering war crimes and violence in Syria.

Digital Rights Watch was not the only organisation that appeared before the joint committee; representatives from industry body Communications Alliance, Meta, Twitter, and Snap were also present on Wednesday.

Meta APAC VP of public policy Simon Milner told the committee that Meta supported the idea of holding all companies accountable for addressing harmful content on their online platforms, which could include making it mandatory for companies to use proactive detection technology.

While Milner said he was happy for government to consider legislating these types of measures, he acknowledged that not all companies have the capacity to implement them. He added the caveat, however, that all companies should still have the means to measure the prevalence of abhorrent violent material on their online platforms and ways to address that content.

Milner also said the need for companies to proactively monitor for abhorrent violent material may change once the Online Safety Act comes into effect early next year, as the law gives the eSafety Commissioner more powers to hold services accountable and creates new take-down schemes that compel the removal or blocking of "harmful" or abhorrent violent material.

"That's going to include some codes and frameworks that will encourage companies to take some of those corrective steps in relation to certain classes of material," Milner told the committee. 

A coalition of organisations -- including those that appeared before the committee on Wednesday -- came together a fortnight ago to recommend various amendments to Australia's abhorrent violent material laws.

The organisations jointly proposed various amendments to the AVM Act, ranging from additional review processes to clearer definitions and lower penalties.

Chief among the legislative amendments recommended by the coalition is more clarity about when the law's obligations to remove abhorrent violent material are triggered. The coalition said many organisations are confused because guidance from the Attorney-General's Department has indicated there is no obligation to proactively monitor for abhorrent violent material, only a requirement to remove such material when it is found.

RELATED COVERAGE

  • Canberra asks big tech to introduce detection capabilities in encrypted communication
  • Australian government prefers education over prosecution to deter cyberbullying
  • House passes Online Safety Act as Senate opposes 'big tech' influence committee
  • Senate committee recommends 'rushed' Online Safety Bill be passed
  • Twitter and Twitch added to list of those concerned with Australia's Online Safety Bill
  • Australian Online Privacy Bill to make social media age verification mandatory for tech giants, Reddit, Zoom, gaming platforms

