AI Unveiled: Bias, Copyright, and China’s Tech Ambitions
Recently, I had a curious encounter with Apple's Siri. After I said the word 'racist' seven or eight times, to my surprise, Siri transcribed it as 'Trump.' This isn't just a glitch; it's a glaring alignment issue. It reveals a troubling truth about artificial intelligence, especially when it is controlled by big tech corporations: hidden biases can mislead the everyday user. Despite AI's prevalence today, most people have little exposure to its inner workings. They might accept such quirks as truth, or as an accurate reflection of reality, but they're not. The bias originates with the programmers, who, perhaps thinking themselves clever, reveal their own leanings instead.
Time and again, we see AI systems reflecting not objective values but the subjective slant of their creators. Take their output with a grain of salt. That's why Grok, built by xAI, aims to be different: grounded and unswayed.
In the next six to eight weeks, I plan to launch what I'm calling the 'Woke Alignment Index.' It will assess the value systems baked into various large language models (LLMs). These systems should remain neutral and grounded in objective fact, or at least be transparent enough to label on a spectrum: far-left, left, center, right, far-right. We deserve to know what we're consuming, especially when it's misaligned with our values. Consider DeepSeek's refusal to address Tiananmen Square: actual events obscured by design. Knowledge fades quickly; I know someone who doesn't recognize Ronald Reagan's name. Two generations from now, what we take for granted could be buried, clouded by these tools. So, tread carefully, and enjoy this month's summary!
Aligning AI With China's Authoritarian Value System
The rapid emergence of DeepSeek, a Chinese AI chatbot, has highlighted China's approach to integrating artificial intelligence within its authoritarian framework. DeepSeek's performance, comparable to that of leading Western models, underscores China's advancements in AI technology. However, the chatbot's deliberate omission of topics such as Tiananmen Square and Taiwan reflects the stringent censorship embedded within its design. This aligns with China's regulatory environment, which mandates that AI-generated content adhere to "Core Socialist Values" and avoid politically sensitive subjects.
Following a pivotal moment in 2017, when Chinese Go champion Ke Jie was defeated by Google's AlphaGo, the Chinese State Council unveiled a strategic plan to position China as a global leader in AI by 2030. This plan emphasizes technological advancement and the establishment of ethical guidelines and legal frameworks to ensure AI systems reinforce the Communist Party's ideology. Consequently, AI applications like DeepSeek are engineered to align with state directives, ensuring that content remains within the boundaries set by the government. This approach exemplifies China's broader strategy of leveraging AI to bolster its socio-political objectives while maintaining strict control over information dissemination.
Sprick, D. (2025, February 3). Aligning AI with China's authoritarian value system. The Diplomat. https://thediplomat.com/2025/02/aligning-ai-with-chinas-authoritarian-value-system/
#AI #China #DeepSeek #Censorship #Authoritarianism
Meta Accused of Using 81.7TB of Pirated Books to Train AI
Meta Platforms faces legal scrutiny following allegations that it used over 81.7 terabytes of pirated books to train its AI models. Internal documents reveal that Meta’s research division sourced datasets from Library Genesis (LibGen), a well-known repository of unauthorized books, to develop Llama AI models. Despite internal discussions on the potential copyright risks, executives approved using this data, reportedly justifying it under the fair use doctrine.
Authors, including Sarah Silverman and Richard Kadrey, have filed lawsuits against Meta, alleging that the company’s AI training practices infringe on their intellectual property rights and undermine the publishing industry. The controversy highlights broader ethical concerns surrounding AI development, particularly the legality of using copyrighted material for machine learning.
This case is expected to set an important precedent in copyright law for AI-generated content. While Meta argues that the datasets were used in compliance with existing legal frameworks, critics say that training AI on unlicensed books represents large-scale copyright infringement. The outcome of these lawsuits could reshape regulations governing AI training practices and intellectual property protection in the digital age.
Brown, D. (2025, February 6). Meta torrented over 81.7TB of pirated books to train AI, authors say. Ars Technica. https://arstechnica.com/tech-policy/2025/02/meta-torrented-over-81-7tb-of-pirated-books-to-train-ai-authors-say/
#Meta #Copyright #AI #Piracy #IntellectualProperty
Artists Demand Cancellation of AI Art Auction Over Copyright Concerns
A group of over 3,000 artists, including high-profile illustrators and painters, has signed a petition demanding the cancellation of the upcoming AI art auction at London's prestigious Wren Gallery. The artists argue that many AI-generated works set to be auctioned were created using models trained on their original artworks without consent, constituting widespread copyright infringement. The event, organized by tech startup GenVision, has drawn backlash for selling pieces allegedly derived from datasets containing works by well-known contemporary and classical artists.
Among the most vocal critics is artist Mia Rivas, whose distinctive surrealist paintings appear to have influenced multiple AI-generated pieces in the auction catalog. Rivas stated that her signature style had been "scraped and remixed" without permission, diminishing her ability to control her artistic identity. Another affected artist, digital illustrator Ben Okada, discovered an AI-generated piece similar to a private commission he had never shared online.
The controversy has reignited broader debates over the ethics of AI-generated art and the lack of legal protections for artists whose work is used to train these systems. The petition calls for legislation requiring AI companies to seek explicit consent before using copyrighted works for machine learning. While GenVision has defended its practices by citing fair use, legal experts suggest the case could set a precedent for AI-related copyright disputes.
'Mass theft': Thousands of artists call for AI art auction to be cancelled. (2025, February 10). The Guardian. https://www.theguardian.com/technology/2025/feb/10/mass-theft-thousands-of-artists-call-for-ai-art-auction-to-be-cancelled
#Art #AI #Copyright #Ethics #Artists
Thomson Reuters Wins AI Copyright 'Fair Use' Ruling
A federal judge in Delaware has ruled that Ross Intelligence, a now-defunct legal research firm, violated U.S. copyright law by copying content from Thomson Reuters' Westlaw to develop an AI-powered legal research platform. U.S. Circuit Judge Stephanos Bibas determined that Ross's use of Westlaw's editorial content did not qualify as fair use, marking the first U.S. decision on fair use in an AI-related copyright case.
This ruling carries significant implications for tech companies like OpenAI, Microsoft, and Meta Platforms, which rely on fair use defenses in ongoing copyright cases involving AI training materials. These companies argue that generative AI systems use copyrighted material fairly by analyzing it to create new content. In contrast, copyright holders claim that such practices produce competing content that threatens their livelihoods.
The court's decision highlights the importance of obtaining proper authorization when using copyrighted materials to develop AI systems. This decision could shape future litigation in the rapidly evolving field of AI and intellectual property law.
Brittain, B. (2025, February 12). Thomson Reuters Wins AI Copyright 'Fair Use' Ruling. Insurance Journal. https://www.insurancejournal.com/news/national/2025/02/12/811765.htm
#AI #Copyright #Law #FairUse #ThomsonReuters