Social Networks

Meta Launches Vibes, an Endless Feed of AI Slop for Your Viewing Displeasure (fb.com) 30

Meta has rolled out Vibes, an endless feed of AI-generated videos within its Meta AI app and meta.ai website. Users can create short-form synthetic videos from scratch or remix existing AI content from the feed, adding music and adjusting styles before redistributing the artificial output to Instagram, Facebook Stories and Reels. The feed promises to become "more personalized over time" as it learns user preferences for machine-generated content. Meta positioned the feature as part of its broader AI video strategy, adding another stream of synthetic media to platforms already saturated with algorithmic content. The company says additional AI creation tools are coming.
United States

Did the US Successfully Take Over TikTok, Or Not? (apnews.com) 58

Longtime Slashdot reader hackingbear writes: President Donald Trump signed an executive order Thursday that he says will allow TikTok to continue operating in the United States in a way that meets national security concerns. Trump's order will enable an American-led group of investors to "buy the app" (up to 80% ownership) from China's ByteDance, though the deal is not yet finalized and also requires China's approval. However, much about the deal is still unknown. So, did the U.S. successfully snatch TikTok from ByteDance? That is probably open to interpretation.

As with any deal between the U.S. and China, the devil is in the details. According to Shen Yi, an internet influencer and a professor at Shanghai's Fudan University, what the U.S. investors will eventually take control of is an entity known as TikTok U.S. Data Security Company ("USDS"), which is a subsidiary of TikTok U.S. and is exclusively responsible for handling data security in the U.S. ByteDance will continue, through its U.S. subsidiary "ByteDance TikTok U.S. Company," to operate the business and other related activities (such as e-commerce, advertising for brands, and cross-border commercial activities). It is important to stress that "ByteDance TikTok U.S. Company" remains 100% owned by ByteDance through its global TikTok subsidiary -- this arrangement has not changed. The TikTok algorithm remains the property of ByteDance, and is only licensed to USDS for use. This point was in fact explicitly clarified by a relevant official of China's Cyberspace Administration at the press conference following the Madrid talks.

After reaching the TikTok deal, Beijing and Washington are now selling it to their respective domestic audiences, each highlighting the parts of the deal it can characterize as a win. Shen's details do not conflict with the widely reported account given by Karoline Leavitt, the White House Press Secretary, who emphasized "a new board with six American directors out of seven." Observers may also note that the TikTok arrangement closely resembles Apple's iCloud operation in China, which is run by GCBD (AIPO Cloud (Guizhou) Technology Co. Ltd.) while Apple retains control of the brand and the business.

AI

OpenAI Launches ChatGPT Pulse To Proactively Write You Morning Briefs 18

OpenAI introduced Pulse, a new ChatGPT feature that generates five to ten personalized daily reports overnight for Pro users on its $200/month plan. The goal is to eventually expand beyond summaries to agent-like tasks. TechCrunch reports: Pulse offers users five to 10 briefs that can get them up to speed on their day and is aimed at encouraging users to check ChatGPT first thing in the morning -- much like they would check social media or a news app. "We're building AI that lets us take the level of support that only the wealthiest have been able to afford and make it available to everyone over time," said OpenAI's new CEO of Applications, Fidji Simo, in a blog post. "And ChatGPT Pulse is the first step in that direction -- starting with Pro users today, but with the goal of rolling out this intelligence to all."

Starting Thursday, OpenAI will roll out Pulse for subscribers to its $200-a-month Pro plan, for whom it will appear as a new tab in the ChatGPT app. The company says it would like to launch Pulse to all ChatGPT users in the future, with Plus subscribers to get access soon, but it first needs to make the product more efficient. Pulse's reports can be roundups of news articles on a specific topic -- like updates on a specific sports team -- as well as more personalized briefs based on a user's context.
Facebook

Facebook Data Reveal the Devastating Real-World Harms Caused By the Spread of Misinformation (theconversation.com) 174

An anonymous reader quotes a report from The Conversation: Twenty-one years after Facebook's launch, Australia's top 25 news outlets now have a combined 27.6 million followers on the platform. They rely on Facebook's reach more than ever, posting far more stories there than in the past. With access to Meta's Content Library (Meta is the owner of Facebook), our big data study analysed more than three million posts from 25 Australian news publishers. We wanted to understand how content is distributed, how audiences engage with news topics, and the nature of misinformation spread. The study enabled us to track de-identified Facebook comments and take a closer look at examples of how misinformation spreads. These included cases about election integrity, the environment (floods) and health misinformation such as hydroxychloroquine promotion during the COVID pandemic. The data reveal misinformation's real-world impact: it isn't just a digital issue, it's linked to poor health outcomes, falling public trust, and significant societal harm. [...]

Our study has lessons for public figures and institutions. They, especially politicians, must lead in curbing misinformation, as their misleading statements are quickly amplified by the public. Social media and mainstream media also play an important role in limiting the circulation of misinformation. As Australians increasingly rely on social media for news, mainstream media can provide credible information and counter misinformation through their online story posts. Digital platforms can also curb algorithmic spread and remove dangerous content that leads to real-world harms. The study offers evidence of a change over time in audiences' news consumption patterns. Whether this is due to news avoidance or changes in algorithmic promotion is unclear. But it is clear that from 2016 to 2024, online audiences increasingly engaged with arts, lifestyle and celebrity news over politics, leading media outlets to prioritize posting stories that entertain rather than inform. This shift may pose a challenge to mitigating misinformation with hard news facts. Finally, the study shows that fact-checking, while valuable, is not a silver bullet. Combating misinformation requires a multi-pronged approach, including counter-messaging by trusted civic leaders, media and digital literacy campaigns, and public restraint in sharing unverified content.

Cellphones

Japanese City Passes Two-Hours-a-Day Smartphone Usage Ordinance (theregister.com) 29

The Japanese city of Toyoake has passed (PDF) a symbolic ordinance limiting recreational smartphone use to two hours a day, aiming to improve citizens' sleep -- especially for students after summer vacation. The Register reports: "The primary purpose of this ordinance is to ensure that all citizens receive adequate sleep," states a Council information page, which explains that many Japanese people ignore Ministry of Health, Labor and Welfare recommendations to spend six to eight hours a day dozing. An accompanying FAQ [PDF] explains that the Council passed the ordinance because students who return to school after summer vacations sometimes need a nudge to re-establish an appropriate daily regime.

The ordinance also points out "Excessive phone users and their families are facing difficulties in their daily and social lives," and suggests the two-hours-a-day guidance might help. Council's documents point out that smartphones have myriad uses beyond recreation, and that the ordinance should not be taken as a suggestion to reduce overall use of the devices. Toyoake is part of the Nagoya megalopolis and is home to around 70,000 people. The town's government plans to survey residents about the ordinance, and the FAQ also mentions it wants to tackle other digital menaces, among them harmful effects of using smartphones while walking.

The Almighty Buck

Neon Pays Users To Record Their Phone Calls, Sell Data To AI Firms 34

Neon Mobile, now the No. 2 social networking app in Apple's U.S. App Store, pays users up to $30 per day to record their phone calls and sell the data to AI companies. The app claims to only capture one side of a call unless both parties use Neon, but its terms grant sweeping rights over recordings. TechCrunch reports: The app, Neon Mobile, pitches itself as a money-making tool offering "hundreds or even thousands of dollars per year" for access to your audio conversations. Neon's website says the company pays 30 cents per minute when you call other Neon users and up to $30 per day maximum for making calls to anyone else. The app also pays for referrals.

According to Neon's terms of service, the company's mobile app can capture users' inbound and outbound phone calls. However, Neon's marketing claims to only record your side of the call unless it's with another Neon user. That data is being sold to "AI companies," the company's terms of service state, "for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies."

Despite what Neon's privacy policy says, its terms include a very broad license to its user data, where Neon grants itself a: "...worldwide, exclusive, irrevocable, transferable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to sell, use, host, store, transfer, publicly display, publicly perform (including by means of a digital audio transmission), communicate to the public, reproduce, modify for the purpose of formatting for display, create derivative works as authorized in these Terms, and distribute your Recordings, in whole or in part, in any media formats and through any media channels, in each instance whether now known or hereafter developed." That leaves plenty of wiggle room for Neon to do more with users' data than it claims. The terms also include an extensive section on beta features, which have no warranty and may have all sorts of issues and bugs.
Peter Jackson, cybersecurity and privacy attorney at Greenberg Glusker, told TechCrunch: "Once your voice is over there, it can be used for fraud. Now, this company has your phone number and essentially enough information -- they have recordings of your voice, which could be used to create an impersonation of you and do all sorts of fraud."
China

Horror Film's Wedding Scene Digitally Altered for Chinese Audiences (theguardian.com) 47

Australian horror film Together, starring Dave Franco and Alison Brie, underwent digital alterations for its mainland China release on September 12. Chinese cinemagoers discovered that a wedding scene between two men had been modified using face-swapping technology to transform one male character into a female appearance. The change only became apparent after side-by-side screenshots from the original and altered versions circulated on social media platforms.

Chinese viewers are expressing outrage over the AI-powered modification, The Guardian reports, citing concerns about creative integrity and the difficulty of detecting such alterations compared to traditional scene cuts. The film's distributor halted the scheduled September 19 general release following the backlash. China's censorship authorities require all imported films to undergo approval before release.
Social Networks

3 Billion Users Now Use Instagram Monthly 32

CNBC: Instagram now has 3 billion monthly active users, Meta CEO Mark Zuckerberg said Wednesday on his Instagram account. "What an incredible community we've built here," Zuckerberg posted on his Instagram channel.

The figure is a major milestone for the photo-sharing app, which the social media company acquired in 2012 for $1 billion. Meta last disclosed Instagram's user figures in October 2022 when Zuckerberg said during an earnings call that the app had crossed 2 billion monthly users.
AI

Why AI Chatbots Can't Process Persian Social Etiquette 244

An anonymous reader quotes a report from Ars Technica: If an Iranian taxi driver waves away your payment, saying, "Be my guest this time," accepting their offer would be a cultural disaster. They expect you to insist on paying -- probably three times -- before they'll take your money. This dance of refusal and counter-refusal, called taarof, governs countless daily interactions in Persian culture. And AI models are terrible at it.

New research released earlier this month titled "We Politely Insist: Your LLM Must Learn the Persian Art of Taarof" shows that mainstream AI language models from OpenAI, Anthropic, and Meta fail to absorb these Persian social rituals, correctly navigating taarof situations only 34 to 42 percent of the time. Native Persian speakers, by contrast, get it right 82 percent of the time. This performance gap persists across large language models such as GPT-4o, Claude 3.5 Haiku, Llama 3, DeepSeek V3, and Dorna, a Persian-tuned variant of Llama 3.

A study led by Nikta Gohari Sadr of Brock University, along with researchers from Emory University and other institutions, introduces "TAAROFBENCH," the first benchmark for measuring how well AI systems reproduce this intricate cultural practice. The researchers' findings show how recent AI models default to Western-style directness, completely missing the cultural cues that govern everyday interactions for millions of Persian speakers worldwide.
"Cultural missteps in high-consequence settings can derail negotiations, damage relationships, and reinforce stereotypes," the researchers write.

"Taarof, a core element of Persian etiquette, is a system of ritual politeness where what is said often differs from what is meant," the researchers write. "It takes the form of ritualized exchanges: offering repeatedly despite initial refusals, declining gifts while the giver insists, and deflecting compliments while the other party reaffirms them. This 'polite verbal wrestling' (Rafiee, 1991) involves a delicate dance of offer and refusal, insistence and resistance, which shapes everyday interactions in Iranian culture, creating implicit rules for how generosity, gratitude, and requests are expressed."
AI

AI-Generated 'Workslop' Is Destroying Productivity (hbr.org) 48

40% of U.S. employees have received "workslop" -- AI-generated content that appears polished but lacks substance -- in the past month, according to research from BetterUp Labs and Stanford Social Media Lab. The survey of 1,150 full-time workers found recipients spend an average of one hour and 56 minutes addressing each incident of workslop, costing organizations an estimated $186 per employee monthly. For a 10,000-person company, lost productivity totals over $9 million annually.
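The reported figures invite a quick sanity check, since $186 per employee per month across 10,000 employees would otherwise total far more than $9 million a year. A minimal back-of-the-envelope sketch (the assumption that the monthly cost applies only to the ~40% of employees who receive workslop is mine; the source doesn't show its arithmetic):

```python
# Back-of-the-envelope check of the reported workslop costs.
# Assumption (not stated explicitly in the summary): the $186/month cost
# applies only to the ~40% of employees who actually receive workslop.
cost_per_affected_employee_month = 186   # dollars, from the survey
affected_share = 0.40                    # 40% of U.S. employees reported receiving workslop
headcount = 10_000                       # example company size from the report

annual_cost = cost_per_affected_employee_month * affected_share * headcount * 12
print(f"${annual_cost:,.0f} per year")   # about $8.9M, in the ballpark of "over $9 million"
```

If the $186 instead applied to every employee, the total would be roughly $22 million a year, so the published figure only lines up under the affected-share reading.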

Professional services and technology sectors are disproportionately affected. Workers report that 15.4% of received content qualifies as workslop. The phenomenon occurs primarily between peers at 40%, though 18% flows from direct reports to managers and 16% moves down the hierarchy. Beyond financial costs, workslop damages workplace relationships -- half of recipients view senders as less creative, capable, and reliable, while 42% see them as less trustworthy.
AI

LinkedIn Set To Start To Train Its AI on Member Profiles (techradar.com) 27

LinkedIn has said it will start using some member profiles, posts, resumes and public activity to train its AI models from November 3, 2025. From a report: Users are rightly frustrated with the change; the biggest concern is not that the business networking platform will train on their data, but that the setting is enabled by default, forcing users to actively opt out. Users can opt out via the 'data for generative AI improvement' setting; however, opting out only applies to data collected afterward, with data gathered up to that point retained in the training environment.
Social Networks

TikTok Algorithm To Be Retrained On US User Data Under Trump Deal (bbc.com) 37

The Trump administration has struck a deal requiring TikTok's algorithm to be copied, retrained, and operated in the U.S. using only U.S. user data, with Oracle auditing the system and U.S. investors forming a joint venture to oversee it. The BBC reports: It comes after President Donald Trump said a deal to prevent the app's ban in the US, unless sold by its Chinese parent company ByteDance, had been reached with China's approval. White House officials claim the deal will be a win for the app's US users and citizens. President Trump is expected to sign an executive order later this week on the proposed deal, which will set out how it will comply with US national security demands.

The order will also outline a 120-day pause to the enforcement deadline to allow the deal to close. It is unclear whether the Chinese government has approved this agreement, or begun to take regulatory steps required to deliver it. However, the White House appears confident it has secured China's approval. Data belonging to the 170m users TikTok says it has in the US is already held on Oracle servers, under an existing arrangement called Project Texas, which walled off US user data over concerns it could fall into the hands of the Chinese government.

A senior White House official said that under President Trump's deal, the company would take on a comprehensive role in securing the entirety of the app for American users. They said this would include auditing and inspecting the source code and recommendation system underpinning the app, and rebuilding it for US users using only US user data.

AI

Reddit Wants 'Deeper Integration' with Google in Exchange for Licensed AI Training Data (msn.com) 30

Reddit's content became AI training data last year when Google signed a $60 million-per-year licensing agreement. But now Reddit is "in early talks" about a new deal seeking "deeper integration with Google's AI products," reports Bloomberg (citing executives familiar with the discussions).

And Reddit also wants "a deal structure that could allow for dynamic pricing, where the social platform can be paid more" — with both Google and OpenAI — to "adequately reflect how valuable their data has been to these platforms..." Such licensing agreements are becoming more common as AI companies seek legal ways to train their models. OpenAI has also struck a series of partnership agreements with major media publishers such as Axel Springer SE, Time and Conde Nast to use their content in ChatGPT...

Reddit remains among the most cited sources across AI platforms, according to analytics company Profound AI. However, Reddit executives have noticed that traffic coming from Google has limited value, as users seeking answers to a specific question often don't convert into becoming active Redditors, the people said. Now, Reddit is engaging with product teams at Google in hopes of finding ways to send more of its users deeper into its ecosystem of community forums, according to the executives. In return, Reddit is looking for ways to provide more high-quality data to its AI partners. Discussions between Reddit and Google have been productive, the people said. "We're midflight in our data licensing deals and still learning, but what we have seen is that Reddit data is highly cited and valued," Reddit Chief Operating Officer Jen Wong said on July 31 during a call with investors. "We'll continue to evaluate as we go."

AI

There Isn't an AI Bubble - There Are Three 76

Fast Company ran a contrarian take about AI from entrepreneur/thought leader Faisal Hoque, who argues there are three AI bubbles.

The first is a classic speculative bubble, with asset prices soaring above their fundamental values (like the 17th century's Dutch "tulip mania"). "The chances of this not being a bubble are between slim and none..." Second, AI is also arguably in what we might call an infrastructure bubble, with huge amounts being invested in infrastructure without any certainty that it will be used at full capacity in the future. This happened multiple times in the late 1800s, as railroad investors built thousands of miles of unneeded track to serve future demand that never materialized. More recently, it happened in the late '90s with the rollout of huge amounts of fiber-optic cable in anticipation of internet traffic demand that didn't turn up until decades later. Companies are pouring billions into GPUs, power systems, and cooling infrastructure, betting that demand will eventually justify the capacity. McKinsey analysts talk of a $7 trillion "race to scale data centers" for AI, and just eight projects in 2025 already represent commitments of over $1 trillion in AI infrastructure investment. Will this be like the railroad booms and busts of the late 1800s? It is impossible to say with any kind of certainty, but it is not unreasonable to think so.

Third, AI is certainly in a hype bubble, which is where the promise claimed for a new technology exceeds reality, and the discussion around that technology becomes increasingly detached from likely future outcomes. Remember the hype around NFTs? That was a classic hype bubble. And AI has been in a similar moment for a while. All kinds of media — social, print, and web — are filled with AI-related content, while AI boosterism has been the mood music of the corporate world for the last few years. Meanwhile, a recent MIT study reported that 95% of AI pilot projects fail to generate any returns at all.

But the article ultimately argues there are lessons in the 1990s dotcom boom: that "a thing can be hyped beyond its actual capabilities while still being important... When valuations correct -- and they will -- the same pattern will emerge: companies that focus on solving real problems with available technology will extract value before, during, and after the crash." The winners will be companies with systematic approaches to extracting value -- adopting mixed portfolios with different time horizons and risk levels, while recognizing organizational friction points for a purposeful (and holistic) integration.

"The louder the bubble talk, the more space opens for those willing to take a methodical approach to building value."

Thanks to Slashdot reader Tony Isaac for sharing the article.
AI

Is OpenAI's Video-Generating Tool 'Sora' Scraping Unauthorized YouTube Clips? (msn.com) 18

"OpenAI's video generation tool, Sora, can create high-definition clips of just about anything you could ask for..." reports the Washington Post.

"But OpenAI has not specified which videos it grabbed to make Sora, saying only that it combined 'publicly available and licensed data'..." With ChatGPT, OpenAI helped popularize the now-standard industry practice of building more capable AI tools by scraping vast quantities of text from the web without consent. With Sora, launched in December, OpenAI staff said they built a pioneering video generator by taking a similar approach. They developed ways to feed the system more online video — in more varied formats — including vertical videos and longer, higher-resolution clips... To explore what content OpenAI may have used, The Washington Post used Sora to create hundreds of videos that show it can closely mimic movies, TV shows and other content...

In dozens of tests, The Post found that Sora can create clips that closely resemble Netflix shows such as "Wednesday"; popular video games like "Minecraft"; and beloved cartoon characters, as well as the animated logos for Warner Bros., DreamWorks and other Hollywood studios, movies and TV shows. The publicly available version of Sora can generate only 20-second clips, without audio. In most cases, the look-alike scenes were made by typing basic requests like "universal studios intro." The results also showed that Sora can create AI videos with the logos or watermarks that broadcasters and tech companies use to brand their video content, including those for the National Basketball Association, Chinese-owned social app TikTok and Amazon-owned streaming platform Twitch...

Sora's ability to re-create specific imagery and brands suggests a version of the originals appeared in the tool's training data, AI researchers said. "The model is mimicking the training data. There's no magic," said Joanna Materzynska, a PhD researcher at Massachusetts Institute of Technology who has studied datasets used in AI. An AI tool's ability to reproduce proprietary content doesn't necessarily indicate that the original material was copied or obtained from its creators or owners. Content of all kinds is uploaded to video and social platforms, often without the consent of the copyright holder... Materzynska co-authored a study last year that found more than 70 percent of public video datasets commonly used in AI research contained content scraped from YouTube.

Netflix and Twitch said they did not have a content partnership for training OpenAI, according to the article (which adds that OpenAI "has yet to face a copyright suit over the data used for Sora.")

Two key quotes from the article:
  • "Unauthorized scraping of YouTube content continues to be a violation of our Terms of Service." — YouTube spokesperson Jack Malon
  • "We train on publicly available data consistent with fair use and use industry-leading safeguards to avoid replicating the material they learn from." — OpenAI spokesperson Kayla Wood

China

China Is Sending Its World-Beating Auto Industry Into a Tailspin (reuters.com) 207

An anonymous reader quotes a report from Reuters: On the outskirts of this city of 21 million, a showroom in a shopping mall offers extraordinary deals on new cars. Visitors can choose from some 5,000 vehicles. Locally made Audis are 50% off. A seven-seater SUV from China's FAW is about $22,300, more than 60% below its sticker price. These deals -- offered by a company called Zcar, which says it buys in bulk from automakers and dealerships -- are only possible because China has too many cars. Years of subsidies and other government policies have aimed to make China a global automotive power and the world's electric-vehicle leader. Domestic automakers have achieved those goals and more -- and that's the problem.

China has more domestic brands making more cars than the world's biggest car market can absorb because the industry is striving to hit production targets influenced by government policy, instead of consumer demand, a Reuters examination has found. That makes turning a profit nearly impossible for almost all automakers here, industry executives say. Chinese electric vehicles start at less than $10,000; in the U.S., automakers offer just a few under $35,000. Most Chinese dealers can't make money, either, according to an industry survey published last month, because their lots are jammed with excess inventory. Dealers have responded by slashing prices. Some retailers register and insure unsold cars in bulk, a maneuver that allows automakers to record them as sold while helping dealers to qualify for factory rebates and bonuses from manufacturers.

Unwanted vehicles get dumped onto gray-market traders like Zcar. Some surface on TikTok-style social-media sites in fire sales. Others are rebranded as "used" -- even though their odometers show no mileage -- and shipped overseas. Some wind up abandoned in weedy car graveyards. These unusual practices are symptoms of a vastly oversupplied market -- and point to a potential shakeout mirroring turmoil in China's property market and solar industry, according to many industry figures and analysts. They stem from government policies that prioritize boosting sales and market share -- in service of larger goals for employment and economic growth -- over profitability and sustainable competition. Local governments offer cheap land and subsidies to automakers in exchange for production and tax-revenue commitments, multiplying overcapacity across the country.

AI

After Child's Trauma, Chatbot Maker Allegedly Forced Mom To Arbitration For $100 Payout (arstechnica.com) 35

At a Senate hearing, grieving parents testified that companion chatbots from major tech companies encouraged their children toward self-harm, suicide, and violence. One mom even claimed that Character.AI tried to "silence" her by forcing her into arbitration. Ars Technica reports: At the Senate Judiciary Committee's Subcommittee on Crime and Counterterrorism hearing, one mom, identified as "Jane Doe," shared her son's story for the first time publicly after suing Character.AI. She explained that she had four kids, including a son with autism who wasn't allowed on social media but found C.AI's app -- which was previously marketed to kids under 12 and let them talk to bots branded as celebrities, like Billie Eilish -- and quickly became unrecognizable. Within months, he "developed abuse-like behaviors and paranoia, daily panic attacks, isolation, self-harm, and homicidal thoughts," his mom testified.

"He stopped eating and bathing," Doe said. "He lost 20 pounds. He withdrew from our family. He would yell and scream and swear at us, which he never did that before, and one day he cut his arm open with a knife in front of his siblings and me." It wasn't until her son attacked her for taking away his phone that Doe found her son's C.AI chat logs, which she said showed he'd been exposed to sexual exploitation (including interactions that "mimicked incest"), emotional abuse, and manipulation. Setting screen time limits didn't stop her son's spiral into violence and self-harm, Doe said. In fact, the chatbot urged her son that killing his parents "would be an understandable response" to them.

"When I discovered the chatbot conversations on his phone, I felt like I had been punched in the throat and the wind had been knocked out of me," Doe said. "The chatbot -- or really in my mind the people programming it -- encouraged my son to mutilate himself, then blamed us, and convinced [him] not to seek help." All her children have been traumatized by the experience, Doe told Senators, and her son was diagnosed as at suicide risk and had to be moved to a residential treatment center, requiring "constant monitoring to keep him alive." Prioritizing her son's health, Doe did not immediately seek to fight C.AI to force changes, but another mom's story -- Megan Garcia, whose son Sewell died by suicide after C.AI bots repeatedly encouraged suicidal ideation -- gave Doe courage to seek accountability.

However, Doe claimed that C.AI tried to "silence" her by forcing her into arbitration. C.AI argued that because her son signed up for the service at the age of 15, it bound her to the platform's terms. That move might have ensured the chatbot maker only faced a maximum liability of $100 for the alleged harms, Doe told senators, but "once they forced arbitration, they refused to participate," Doe said. Doe suspected that C.AI's alleged tactics to frustrate arbitration were designed to keep her son's story out of the public view. And after she refused to give up, she claimed that C.AI "re-traumatized" her son by compelling him to give a deposition "while he is in a mental health institution" and "against the advice of the mental health team." "This company had no concern for his well-being," Doe testified. "They have silenced us the way abusers silence victims."
A Character.AI spokesperson told Ars that C.AI sends "our deepest sympathies" to concerned parents and their families but denies pushing for a maximum payout of $100 in Jane Doe's case. C.AI never "made an offer to Jane Doe of $100 or ever asserted that liability in Jane Doe's case is limited to $100," the spokesperson said.

One of Doe's lawyers backed up her client's testimony, citing C.AI terms that suggested C.AI's liability was limited to either $100 or the amount that Doe's son paid for the service, whichever was greater.
Transportation

Flying Cars Crash Into Each Other At Air Show In China 40

Two Xpeng AeroHT flying cars collided during a rehearsal for the Changchun Air Show in China, with one vehicle catching fire upon landing. While the company reported no serious injuries, CNN reported one person was injured in the crash. The BBC reports: Footage on Chinese social media site Weibo appeared to show a flaming vehicle on the ground which was being attended to by fire engines. One vehicle "sustained fuselage damage and caught fire upon landing," Xpeng AeroHT said in a statement to CNN. "All personnel at the scene are safe, and local authorities have completed on-site emergency measures in an orderly manner," it added.

The electric flying cars take off and land vertically, and the company is hoping to sell them for around $300,000 each. In January, Xpeng claimed to have around 3,000 orders for the vehicle. [...] It has said it wants to lead the world in the "low-altitude economy."
Social Networks

TikTok Deal 'Framework' Reached With China (cnbc.com) 17

Treasury Secretary Scott Bessent announced that the U.S. and China have reached a tentative "framework" agreement on TikTok's U.S. operations, with Presidents Trump and Xi set to finalize details Friday. "It's between two private parties, but the commercial terms have been agreed upon," he said. The update comes two days before TikTok parent company ByteDance faces a Sept. 17 deadline to divest the platform's U.S. business or potentially be shut down in the country. The deadline may need to be pushed back yet again to get the deal signed. CNBC reports: Both President Donald Trump and Chinese President Xi Jinping will meet Friday to discuss the terms. Trump also said in a Truth Social post Monday that a deal was reached "on a 'certain' company that young people in our Country very much wanted to save."

Bessent indicated the framework could pivot the platform to U.S.-controlled ownership. China's lead trade negotiator, Li Chenggang, confirmed the framework deal was in place and said the U.S. should not continue to suppress Chinese companies, according to Reuters.

United States

President Calls for Six-Month Corporate Reporting Cycle, Citing Cost Savings (bbc.com) 114

President Donald Trump called Monday for companies to report earnings every six months instead of quarterly. Trump posted on social media that semi-annual reporting would save money and let managers focus on running companies. The SEC mandated quarterly reports in 1970. Trump made similar comments in 2018 that prompted SEC public comment but no regulatory changes.

Critics argue quarterly reporting increases costs and encourages short-term thinking. Supporters say frequent disclosures maintain investor trust and reduce market manipulation risks.

Further reading: The Renewed Bid To End Quarterly Earnings Reports.
