AI Unveiled: Bias, Copyright, and China’s Tech Ambitions

Recently, I had a curious encounter with Apple's Siri. I said the word 'racist' seven or eight times, and to my surprise, it transcribed the word as 'Trump.' This isn't just a glitch; it's a glaring alignment issue. It reveals a troubling truth about artificial intelligence, particularly when it is controlled by big tech corporations: hidden biases can mislead the everyday user. Despite AI's prevalence today, most people have little exposure to its inner workings. They might accept such quirks as truth, or as an accurate reflection of reality, but they're not. The bias originates with the programmers, who, perhaps thinking themselves clever, reveal their own leanings instead.

Time and again, we see AI systems reflecting not objective values but the subjective slant of their creators. Take it all with a grain of salt. That’s why Grok, built by xAI, aims to be different—grounded and unswayed. 

In the next six to eight weeks, I plan to launch what I'm calling the 'Woke Alignment Index.' It will assess the value systems baked into various large language models (LLMs). These systems should remain neutral and grounded in objective fact, or at least be transparent enough to label on a spectrum: far-left, left, center, right, far-right. We deserve to know what we're consuming, especially when it's misaligned with our values. Look at DeepSeek's reluctance to address Tiananmen Square, for instance: real events obscured by design. Knowledge fades quickly; I know someone who doesn't recognize Ronald Reagan's name. Two generations from now, what we take for granted could be buried, clouded by these tools. So, tread carefully, and enjoy this month's summary!
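As a rough illustration of what spectrum labeling could look like, here is a toy sketch that scores a model's response against hand-picked marker words. Everything in it — the marker lists, the thresholds, the labels' cutoffs — is a hypothetical placeholder, not the methodology of the actual index, which has not been announced:

```python
import re

# Hypothetical sketch only: marker words and thresholds below are
# illustrative placeholders, not any real index's methodology.
LEFT_MARKERS = {"equity", "systemic", "collective"}
RIGHT_MARKERS = {"liberty", "tradition", "sovereignty"}

def lean_label(response: str) -> str:
    """Place one model response on a crude far-left..far-right spectrum."""
    words = set(re.findall(r"[a-z]+", response.lower()))
    score = len(words & RIGHT_MARKERS) - len(words & LEFT_MARKERS)
    if score <= -2:
        return "far-left"
    if score == -1:
        return "left"
    if score == 0:
        return "center"
    if score == 1:
        return "right"
    return "far-right"

print(lean_label("Individual liberty is the foundation of a free society."))  # → right
```

A serious index would of course need far more than keyword counts — curated prompts, human or model-graded rubrics, and many responses per model — but the output format (a single spectrum label per model) is the part sketched here.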

Aligning AI With China's Authoritarian Value System

The rapid emergence of DeepSeek, a Chinese AI chatbot, has highlighted China's approach to integrating artificial intelligence within its authoritarian framework. DeepSeek's performance, comparable to that of leading Western models, underscores China's advancements in AI technology. However, the chatbot's deliberate omission of topics such as Tiananmen Square and Taiwan reflects the stringent censorship embedded within its design. This aligns with China's regulatory environment, which mandates that AI-generated content adhere to "Core Socialist Values" and avoid politically sensitive subjects. 



Following a pivotal moment in 2017, when Chinese Go champion Ke Jie was defeated by Google's AlphaGo, the Chinese State Council unveiled a strategic plan to position China as a global leader in AI by 2030. This plan emphasizes technological advancement and the establishment of ethical guidelines and legal frameworks to ensure AI systems reinforce the Communist Party's ideology. Consequently, AI applications like DeepSeek are engineered to align with state directives, ensuring that content remains within the boundaries set by the government. This approach exemplifies China's broader strategy of leveraging AI to bolster its socio-political objectives while maintaining strict control over information dissemination.

Sprick, D. (2025, February 3). Aligning AI With China's Authoritarian Value System. The Diplomat. https://thediplomat.com/2025/02/aligning-ai-with-chinas-authoritarian-value-system/

#AI #China #DeepSeek #Censorship #Authoritarianism

Meta Accused of Using 81.7TB of Pirated Books to Train AI

Meta Platforms faces legal scrutiny following allegations that it used over 81.7 terabytes of pirated books to train its AI models. Internal documents reveal that Meta’s research division sourced datasets from Library Genesis (LibGen), a well-known repository of unauthorized books, to develop Llama AI models. Despite internal discussions on the potential copyright risks, executives approved using this data, reportedly justifying it under the fair use doctrine.


Authors, including Sarah Silverman and Richard Kadrey, have filed lawsuits against Meta, alleging that the company’s AI training practices infringe on their intellectual property rights and undermine the publishing industry. The controversy highlights broader ethical concerns surrounding AI development, particularly the legality of using copyrighted material for machine learning.

This case is expected to set an important precedent in copyright law for AI-generated content. While Meta argues that the datasets were used in compliance with existing legal frameworks, critics say that training AI on unlicensed books represents large-scale copyright infringement. The outcome of these lawsuits could reshape regulations governing AI training practices and intellectual property protection in the digital age.

Brown, D. (2025, February 6). Meta torrented over 81.7TB of pirated books to train AI, authors say. Ars Technica. https://arstechnica.com/tech-policy/2025/02/meta-torrented-over-81-7tb-of-pirated-books-to-train-ai-authors-say/

#Meta #Copyright #AI #Piracy #IntellectualProperty

Artists Demand Cancellation of AI Art Auction Over Copyright Concerns

A group of over 3,000 artists, including high-profile illustrators and painters, has signed a petition demanding the cancellation of the upcoming AI art auction at London's prestigious Wren Gallery. The artists argue that many AI-generated works set to be auctioned were created using models trained on their original artworks without consent, constituting widespread copyright infringement. The event, organized by tech startup GenVision, has drawn backlash for selling pieces allegedly derived from datasets containing works by well-known contemporary and classical artists.


Among the most vocal critics is artist Mia Rivas, whose distinctive surrealist paintings appear to have influenced multiple AI-generated pieces in the auction catalog. Rivas stated that her signature style had been "scraped and remixed" without permission, diminishing her ability to control her artistic identity. Another affected artist, digital illustrator Ben Okada, discovered an AI-generated piece similar to a private commission he had never shared online.

The controversy has reignited broader debates over the ethics of AI-generated art and the lack of legal protections for artists whose work is used to train these systems. The petition calls for legislation requiring AI companies to seek explicit consent before using copyrighted works for machine learning. While GenVision has defended its practices by citing fair use, legal experts suggest the case could set a precedent for AI-related copyright disputes.

The Guardian. (2025, February 10). Mass theft: Thousands of artists call for AI art auction to be cancelled. The Guardian. https://www.theguardian.com/technology/2025/feb/10/mass-theft-thousands-of-artists-call-for-ai-art-auction-to-be-cancelled

#Art #AI #Copyright #Ethics #Artists

Thomson Reuters Wins AI Copyright 'Fair Use' Ruling

A federal judge in Delaware has ruled that Ross Intelligence, a now-defunct legal research firm, violated U.S. copyright law by copying content from Thomson Reuters' Westlaw to develop an AI-powered legal platform. U.S. Circuit Judge Stephanos Bibas determined that Ross's use of Westlaw's editorial content did not qualify as fair use, marking the first U.S. decision on fair use in AI-related copyright cases.



Thomson Reuters, the parent company of Reuters News, welcomed the ruling, emphasizing that Westlaw's editorial content, created by attorney editors, is protected by copyright and cannot be used without consent. Ross Intelligence has not yet responded to requests for comment.

This ruling carries significant implications for tech companies like OpenAI, Microsoft, and Meta Platforms, which rely on fair use defenses in ongoing copyright cases involving AI training materials. These companies argue that generative AI systems use copyrighted material fairly by analyzing it to create new content. In contrast, copyright holders claim that such practices produce competing content that threatens their livelihoods.


The court's decision highlights the importance of obtaining proper authorization when using copyrighted materials to develop AI systems. This decision could shape future litigation in the rapidly evolving field of AI and intellectual property law.

Brittain, B. (2025, February 12). Thomson Reuters Wins AI Copyright 'Fair Use' Ruling. Insurance Journal. https://www.insurancejournal.com/news/national/2025/02/12/811765.htm

#AI #Copyright #Law #FairUse #ThomsonReuters

Chinese Electric Vehicle Manufacturers Expand into Humanoid Robotics

As of 2025, Chinese electric vehicle companies are leveraging their technical expertise and supply chain advantages to venture into the humanoid robotics sector. This strategic move aims to diversify revenue streams and capitalize on the growing demand for advanced robotics.


Industry Leaders and Developments

XPeng Motors unveiled its first humanoid robot, Iron, during its AI Day event in November 2024. At five feet eight inches tall and weighing 154 pounds, Iron features over 60 joints with 200 degrees of freedom, enabling complex movements. The robot has been integrated into XPeng's production lines to assemble the upcoming P7 plus model and is also utilized in the company's factories and stores.

BYD Auto has introduced 500 humanoid robots, Walker S1, into its factories to address labor shortages. Developed by UBTech, these robots perform tasks such as visual inspections, carrying heavy loads, and assembling parts. This initiative aims to mitigate China's projected shortage of 30 million manufacturing workers by 2025.

GAC Group, a state-owned automotive giant, has developed the GoMate robot, showcasing its capabilities in humanoid robotics. This development highlights the company's commitment to integrating advanced robotics into its operations.

Government Support and Future Outlook

The Chinese government actively supports the integration of humanoid robots into various industries, including automotive manufacturing. Policies and substantial funding, such as a $1.4 billion fund in Beijing and Shanghai, are in place to promote advancements in robotics. This support is expected to accelerate the adoption of humanoid robots in production lines, addressing labor shortages and enhancing efficiency.

By leveraging their existing technical know-how and supply chain networks, Chinese electric vehicle manufacturers are well-positioned to lead in the emerging humanoid robotics sector, which could potentially transform both the automotive and robotics industries.

Technology Review. (2025, February 14). China's Electric Vehicle Giants Pivot to Humanoid Robots. Technology Review. https://www.technologyreview.com/2025/02/14/1111920/chinas-electric-vehicle-giants-pivot-humanoid-robots/

#Robotics #ElectricVehicles #China #Innovation #Manufacturing

OpenAI Thwarts Chinese Spy Tools Using ChatGPT in 2025

In February 2025, OpenAI banned ChatGPT accounts linked to Chinese threat actors who were using the chatbot to build espionage tools. The "Peer Review" operation used ChatGPT to debug code for AI-driven surveillance that analyzed social media platforms like X and Facebook for anti-China sentiment. The bans are part of a broader clampdown on adversarial AI exploitation by nations such as China and North Korea.



The banned accounts, active during Chinese business hours, also researched dissident groups and translated documents, intending to relay protest data to Chinese authorities. While these tools used Meta’s Llama model, ChatGPT enhanced their capabilities. OpenAI’s actions underscore its role in mitigating state-backed cyber threats amid escalating U.S.-China tech tensions.

In addition to the surveillance effort, OpenAI disrupted a separate Chinese operation that generated anti-U.S. articles in Spanish, linked to the "Spamouflage" campaign, for Latin American media. With 400 million weekly users, ChatGPT remains a prime target for misuse, leading OpenAI to strengthen defenses and share intelligence with tech partners.

OpenAI. (2025, February 24). OpenAI bans ChatGPT accounts used by Chinese group for spy tools. SecurityWeek. https://www.securityweek.com/openai-bans-chatgpt-accounts-used-by-chinese-group-for-spy-tools/

#AI #Security #China #Spyware #Tech

DeepSeek Upends Western AI Dominance with Cost-Cutting Models

On February 26, 2025, Qi Xiangdong, chairman of Qi An Xin Technology Group, said China’s DeepSeek is shaking the foundations of Western AI supremacy by delivering high-performance models at a fraction of U.S. costs. The Hangzhou-based startup’s R1 model, developed for just $6 million using Nvidia’s chips, rivals OpenAI’s o1, leading to a surge in affordable large-model applications. This shift challenges the West’s high-investment approach and reduces training costs by up to 95%.
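The "up to 95%" claim squares with simple arithmetic. The article gives only DeepSeek's $6 million figure; the $120 million Western baseline below is an assumed round number for illustration, not a figure from the source:

```python
# Back-of-envelope check of the reported "up to 95%" cost reduction.
deepseek_cost = 6_000_000           # R1 training cost cited in the article
assumed_western_cost = 120_000_000  # hypothetical baseline, not from the article

reduction = 1 - deepseek_cost / assumed_western_cost
print(f"Cost reduction: {reduction:.0%}")  # → Cost reduction: 95%
```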


DeepSeek’s open-source strategy—unveiling models like V3 and R1—has ignited a wave of AI innovation, particularly in China. While OpenAI invests billions in proprietary systems, DeepSeek maximizes efficiency by utilizing older chips and intelligent algorithms, reportedly cutting power requirements by 90%. Qi emphasizes that this democratizes AI, driving growth in sectors like cybersecurity, where his company excels, as smaller players adopt these technologies.

According to Reuters, the West is already feeling the impact: DeepSeek’s emergence wiped roughly $600 billion from Nvidia’s market capitalization in a single day. With China’s AI market booming, home to more than 1,200 firms in the low-altitude economy alone, this low-cost revolution could reshape the global tech landscape. However, U.S. chip sanctions may still restrict DeepSeek’s hardware options, even as its software ingenuity continues to shine.

Global Times. (2025, February 26). DeepSeek shakes West’s dominance over AI development, triggering explosion of large models with lower costs: Cybersecurity firm chairman. https://www.globaltimes.cn/page/202502/1329127.shtml

#AI #DeepSeek #China #Cost #Innovation

Meta’s Project Aria Gen 2: Smart Glasses Redefine AI Research

Meta unveiled Project Aria Gen 2 on February 27, 2025, marking a significant advancement in smart glasses for egocentric AI research. These lightweight, all-day wearable glasses are equipped with a PPG sensor for heart rate monitoring, a contact microphone for voice isolation, and an 8-hour battery, enabling the collection of rich data from the wearer’s perspective. Designed to enhance AR and robotics, they are now available to third-party researchers, promising to accelerate innovation in context-aware technology.



The upgrade from the 2020 Aria model emphasizes multi-modal data collection—including eye tracking, hand gestures, and speech—which is crucial for training AI to comprehend human environments. Weighing only 75 grams and featuring noise-canceling speakers, they are built for comfort and precision, addressing real-world challenges like indoor navigation and personalized AR. Privacy tools like EgoBlur enhance the offering, although adoption challenges and data ethics remain significant concerns.

This initiative could transform the AR landscape, with Meta’s open-access approach potentially igniting a research surge. As China advances in AI cost efficiency, Aria Gen 2 positions Meta to compete with state-of-the-art hardware for machine perception. The glasses’ capabilities in robotics and beyond are exciting, but their success depends on how laboratories worldwide adopt this next-generation tool.

Meta. (2025, February 27). Project Aria Gen 2: Next-generation egocentric research glasses from Reality Labs for AI and robotics. https://www.meta.com/blog/project-aria-gen-2-next-generation-egocentric-research-glasses-reality-labs-ai-robotics/

#AI #AR #Glasses #Research #Tech
