Tweaking political content with AI? Govt, Big Tech are watching

Union IT minister Ashwini Vaishnaw confirmed the meetings in response to Mint’s queries.

Since participating in these meetings, Google and Meta have both published notes on tackling AI-altered content and advertisements on intermediary, search and conversational AI platforms, which include ChatGPT, Facebook, Gemini, Google Search, Instagram, WhatsApp and YouTube, among others. Each of these firms was advised to take a “precautionary” approach to information generated by AI, including clearly labelling such content in political advertisements, and restricting AI’s ability to produce search results on key political personalities, parties, or any opinion related to the upcoming 2024 general elections.

US-headquartered firm Adobe—which runs Photoshop, one of the world’s largest creative visualization tools—has taken a similarly careful approach towards how its generative tool, Firefly, can be used to manipulate or create imagery that could be used in political campaigns, said Andy Parsons, senior director of Adobe’s content authenticity initiative, in an interview with Mint.

The Centre also discussed how the above-mentioned intermediaries (barring Adobe), which enjoy safe harbour protection as defined by the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, may be held liable for prosecution if they fail to curb the spread of AI-aided misinformation across internet platforms. The discussions were held in light of a steady proliferation of AI in commonplace content across the internet, which has left top tech firms scrambling to adopt techniques such as watermarking and metadata tagging, all of the officials cited above said.
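
As a rough illustration of the metadata-tagging approach the officials describe, the sketch below attaches a provenance record and a content digest to a piece of media so that later edits can be detected. The field names and workflow are assumptions for illustration only, not the C2PA/Content Credentials schema or any firm’s actual pipeline; production systems would also cryptographically sign the record and embed it in the file itself.

```python
import hashlib
import json
from datetime import datetime, timezone


def tag_provenance(media_bytes: bytes, generator: str, ai_generated: bool) -> dict:
    """Build a hypothetical provenance record for a piece of media.

    The field names are illustrative; real provenance systems sign the
    manifest with the publisher's key and embed it alongside a watermark.
    """
    return {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": ai_generated,
        "created_at": datetime.now(timezone.utc).isoformat(),
    }


def verify_provenance(media_bytes: bytes, record: dict) -> bool:
    """Return True only if the media still matches the digest in its record."""
    return hashlib.sha256(media_bytes).hexdigest() == record["sha256"]


if __name__ == "__main__":
    media = b"...raw image bytes..."
    record = tag_provenance(media, generator="hypothetical-genai-tool", ai_generated=True)
    print(json.dumps(record, indent=2))
    print("untampered:", verify_provenance(media, record))       # True
    print("tampered:  ", verify_provenance(media + b"edit", record))  # False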

The ability of the Centre to urge censoring of specific keywords comes amid a “better understanding of the impact that AI can have in public discourse,” said Adobe’s Parsons. “Governments are understanding that there is no single silver-bullet solution to what could prevent misinformation, and are only now coming to realize how the Munich Accord could impact Big Tech and elections. This could help in government decision-making on how democracies like India can take on sensitive AI-driven misinformation.” 

On 16 February, 20 companies, including Adobe, Google, Meta, Microsoft, OpenAI and X (formerly Twitter), signed the ‘Tech Accord to Combat Deceptive Use of AI in 2024 Elections’ at the Munich Security Conference. The accord proposed “implementing technology to mitigate risks related to deceptive AI election content, assessing AI models…to understand risks, detect distribution and fostering cross-industry resilience,” as part of eight key agreements between tech firms.

Google and Meta’s election-specific strategy disclosures, published on 12 March and 19 March, respectively, elaborate further on the accord. In a post attributed to ‘Google India Team’, the tech firm stated that it will disclose when AI is used in political ads, label AI-generated content on YouTube, and use digital watermarking to identify modified content. On its generative AI platform Gemini, the post said, “We have begun to roll out restrictions on the types of election-related queries for which Gemini will return responses.”
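
Google has not published how Gemini’s election-query restrictions are implemented; a minimal sketch of what such gating could look like, assuming a simple keyword filter, is shown below. The term list and refusal text are assumptions for illustration.

```python
# Assumed keyword gate for election-related prompts; illustrative only.
ELECTION_TERMS = {"election", "lok sabha", "candidate", "party manifesto", "exit poll"}

REFUSAL = ("I'm still learning how to answer this question. "
           "In the meantime, try Google Search.")


def gate_election_query(query: str) -> str | None:
    """Return a canned refusal if the query looks election-related, else None."""
    lowered = query.lower()
    if any(term in lowered for term in ELECTION_TERMS):
        return REFUSAL
    return None


if __name__ == "__main__":
    print(gate_election_query("Who should I vote for in the Lok Sabha election?"))
    print(gate_election_query("What is the capital of France?"))  # None, not gated
```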

Meta, in its post, said that the company is operating 12 fact-check teams that will independently verify AI-generated content, and that altered political content will be restricted across its platforms. “Once a piece of content is rated as ‘altered’, or we detect it as near identical, it appears lower on Facebook. We also dramatically reduce the content’s distribution. On Instagram, altered content is featured less prominently in ‘feed’ and ‘stories’. This significantly reduces the number of people who see it,” the post said.
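
The demotion Meta describes can be pictured as a ranking penalty applied when content is rated ‘altered’ or is nearly identical to something already rated. The sketch below is an assumption-laden illustration: the similarity check uses a crude textual comparison and the penalty factor is invented, whereas Meta’s actual systems rely on media matching and fact-checker ratings it has not detailed publicly.

```python
from difflib import SequenceMatcher

DEMOTION_FACTOR = 0.1            # assumed: keep 10% of the original score
NEAR_IDENTICAL_THRESHOLD = 0.9   # assumed similarity cut-off


def is_near_identical(text: str, rated_altered: list[str]) -> bool:
    """Crude textual near-duplicate check against already-rated items."""
    return any(
        SequenceMatcher(None, text, other).ratio() >= NEAR_IDENTICAL_THRESHOLD
        for other in rated_altered
    )


def rank_score(base_score: float, text: str, rating: str | None,
               rated_altered: list[str]) -> float:
    """Demote content rated 'altered' or nearly identical to rated content."""
    if rating == "altered" or is_near_identical(text, rated_altered):
        return base_score * DEMOTION_FACTOR
    return base_score


if __name__ == "__main__":
    seen_altered = ["Leader X promises free electricity in doctored video"]
    # Near-identical copy of an already-rated post gets demoted.
    print(rank_score(100.0, "Leader X promises free electricity in doctored video!",
                     rating=None, rated_altered=seen_altered))
    # Unrelated content keeps its score.
    print(rank_score(100.0, "Weather update for Mumbai",
                     rating=None, rated_altered=seen_altered))
```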

Neither of the two companies responded to Mint’s emailed queries seeking details on their meetings with Meity officials and ministers.

Senior legal and policy experts said that existing clauses under both the IT Rules, 2021 and the Indian Penal Code (IPC) could be applicable to both Big Tech as an enterprise, as well as users promoting such content—depending upon the issues at hand.

“If an intermediary faces a court order due to their inaction towards proactively curbing AI-driven misinformation on their platforms, they will be liable to face Rule 7 of the IT Rules, 2021—which will therefore turn the tackling of election-time AI misinformation from a responsibility to a liability for these companies,” said NS Nappinai, senior counsel at Supreme Court and founder of Cyber Saathi Foundation.

A senior partner at a top law firm, who requested anonymity since the firm represents one or more of the Big Tech enterprises mentioned here, added that a key challenge to effectively curbing AI threats stems from “an umbrella definition of intermediaries.”

“There is a lack of clear definition of platforms and intermediaries, which leaves our regulatory mechanism with a broad-brushed approach in terms of whom the responsibility and liability lies with. This could pose a challenge in effective, urgent curbing of AI content during the election period,” the lawyer said.

Rule 7, as cited above, states that if a company fails to conduct due diligence in curbing identity infringement or manipulation of various forms, it will be “liable for punishment under any law for the time being in force including the provisions of the Act, and the Indian Penal Code.”

Kazim Rizvi, founding director of policy think-tank The Dialogue, added that effective penalization, a step that could help curb misinformation, would need “greater efforts towards enforcement of existing legal frameworks, rather than an overemphasis on creating new regulations.”

“The current legislative environment already provides a comprehensive foundation to address deepfakes, including Rule 3(1)(b) of the IT Rules, 2021. The nature of synthetic media is not inherently detrimental, and holds significant potential in fields like education, content creation, crime prevention and government programme awareness. Over-regulation might inadvertently constrain these positive applications, thereby diminishing the broader benefits of AI-based technological advancements. The key, therefore, lies in seamlessly operationalizing the existing legal structures, enhancing the capabilities of law enforcement, ensuring that platforms comply with the regulations, and educating the public about their role in identifying and reporting deepfakes, effectively creating a more aware and proactive digital community,” Rizvi added.
