Generative AI threatens voter confidence in what’s real   

Artificial intelligence surrounds U.S. political life, from fundraising to campaign advertising. Some lawmakers are looking to better police the use of generative content in this year’s presidential election, saying it threatens voter confidence in what is real. VOA correspondent Scott Stearns reports.


LogOn: Hologram-like experience allows people to connect

The Dutch company Holoconnects specializes in holographic illusions and is now delivering life-size personal connections with a 2-meter-tall box that makes it feel as if the person you are talking to is physically present. Deana Mitchell has more from Austin, Texas, in this week’s episode of LogOn.


Taiwan attracting Southeast Asian tech students

Taiwan is looking to Southeast Asia as a pipeline to fill its shortage of high-tech talent. The number of foreign students coming to the island has been growing, especially from Vietnam and Indonesia. VOA Mandarin’s Peh Hong Lim reports from Hsinchu, Taiwan. Adrianna Zhang contributed.


EU may suspend TikTok’s new rewards app over risks to kids

LONDON — The European Union on Monday demanded TikTok provide more information about a new app that pays users to watch videos and warned that it could order the video sharing platform to suspend addictive features that pose a risk to kids. 

The 27-nation EU’s executive commission said it was opening formal proceedings to determine whether TikTok Lite breached the bloc’s new digital rules when the app was rolled out in France and Spain. 

Brussels was ratcheting up the pressure on TikTok after the company failed to respond to a request last week for information on whether the new app complies with the Digital Services Act, a sweeping law that took effect last year and is intended to clean up social media platforms. 

TikTok Lite is a slimmed-down version of the main TikTok app that lets users earn rewards. Points earned by watching videos, liking content and following content creators can then be exchanged for rewards including Amazon vouchers and gift cards on PayPal. 

The commission wants to see the risk assessment that TikTok should have carried out before deploying the app in the European Union. It’s worried TikTok launched the app without assessing how to mitigate “potential systemic risks” such as addictive design features that could pose harm to children. 

TikTok didn’t respond immediately to a request for comment. The company said last week it would respond to the commission’s request and noted that rewards are restricted to users 18 years and older, who have to verify their age. 

“With an endless stream of short and fast-paced videos, TikTok offers fun and a sense of connection beyond your immediate circle,” said European Commissioner Thierry Breton, one of the officials leading the bloc’s push to rein in big tech companies. “But it also comes with considerable risks, especially for our children: addiction, anxiety, depression, eating disorders, low attention spans.” 

The EU is giving TikTok 24 hours to turn over the risk assessment and until Wednesday to argue its case. Any order to suspend the TikTok Lite app’s reward features could come as early as Thursday. 

It’s the first time that the EU has issued a legally binding order for such information since the Digital Services Act took effect. Officials stepped up the pressure after TikTok failed to respond to last week’s request for the information. 

The commission warned that if TikTok still fails to respond, the company faces fines worth up to 1% of its total annual income or worldwide turnover and “periodic penalties” of up to 5% of daily income or global turnover. 

TikTok was already facing intensified scrutiny from the EU. The commission has an ongoing in-depth investigation into the main TikTok app’s DSA compliance, examining whether it is doing enough to curb “systemic risks” stemming from its design, including “algorithmic systems” that might stimulate “behavioral addictions.” Officials are worried that measures including age verification tools to stop minors from finding “inappropriate content” might not be effective.


Connected Africa Summit addresses continent’s challenges, opportunities and digital divides

Nairobi, Kenya — Government representatives from Africa, along with ICT (information and communication technology) officials, and international organizations have gathered in Nairobi for a Connected Africa Summit. They are discussing the future of technology, unlocking the continent’s growth beyond connectivity, and addressing the challenges and opportunities in the continent’s information and technology sector.

Speaking at the Connected Africa Summit opening in Nairobi Monday, Kenyan President William Ruto said bridging the technology gap is important for Africa’s economic growth and innovation.  

“Closing the digital divide is a priority in terms of enhancing connectivity, expanding the contribution of the ICT sector to Africa’s GDP and driving overall GDP growth across all sectors. Africa’s digital economy has immense potential…,” Ruto said. “Our youth population, the youngest globally, is motivated and prepared to drive the digital economy, foster innovation and entrench new technologies.”    

Experts say digital transformation in Africa can improve its industrialization, reduce poverty, create jobs, and improve its citizens’ lives.

According to the World Bank, 36 percent of Africa’s 1.3 billion people have access to the internet, and even in some areas with connections, the quality of service is poor compared with other regions.

World Bank figures also show that Africa saw a 115 percent increase in internet users between 2016 and 2021 and that 160 million people gained broadband internet access between 2019 and 2022.  

Africa’s digital growth has been hampered by the lack of an accessible, secure, and reliable internet, which is critical in closing the digital gap and reducing inequalities.  

Lacina Kone is the head of Smart Africa, an organization that coordinates ICT activities within the continent. He says integrating technology into African societies’ daily activities is necessary and cannot be ignored.  

“Digital transformation is no longer a choice but a necessity, just like water utility, just like any other utility we use at home,” Kone said. “So, this connected Africa is an opportunity for all of us. I see a lot of country members, and ICT ministers are here to align our visions together.”

The COVID-19 pandemic accelerated the adoption of technology across different sectors of the African economy, and experts say opportunities now exist in mobile services, the development of broadband infrastructure, and data storage.  

The U.S. ambassador to Kenya, Meg Whitman, called on the summit attendees to develop technologies that can solve people’s problems.  

“I encourage all of you to consider this approach for your economies. Look at what strengths already exist in your countries and ask how technology can solve challenges in those sectors to make you a leader through innovation,” Whitman said. “Sometimes innovation looks like Artificial Intelligence, satellites and e-money. Sometimes though it looks much different than we expect. However, innovation always includes three elements: it’s solution-focused, it’s specific and it’s sustainable. Being solution-focused is the foundation of shaping the future of a connected Africa.”

The summit ends on Friday, but before that, those attending aim to explore ways to improve Africa’s technology usage, enhance continental connectivity, boost competitiveness, and ensure the continent keeps up with the ever-evolving tech sector.


Apple pulls WhatsApp and Threads from App Store on Beijing’s orders

HONG KONG — Apple said it had removed Meta’s WhatsApp messaging app and its Threads social media app from the App Store in China to comply with orders from Chinese authorities.

The apps were removed from the store Friday after Chinese officials cited unspecified national security concerns.

Their removal comes amid elevated tensions between the U.S. and China over trade, technology and national security.

The U.S. has threatened to ban TikTok over national security concerns. But while TikTok, owned by Chinese technology firm ByteDance, is used by millions in the U.S., apps like WhatsApp and Threads are not commonly used in China.

Instead, the messaging app WeChat, owned by Chinese company Tencent, reigns supreme.

Other Meta apps, including Facebook, Instagram and Messenger, remained available for download, although use of such foreign apps is blocked in China by its “Great Firewall” network of filters that restricts access to foreign websites such as Google and Facebook.

“The Cyberspace Administration of China ordered the removal of these apps from the China storefront based on their national security concerns,” Apple said in a statement.

“We are obligated to follow the laws in the countries where we operate, even when we disagree,” Apple said.

A spokesperson for Meta referred requests for comment to Apple.

Apple, previously the world’s top smartphone maker, recently lost the top spot to Korean rival Samsung Electronics. The U.S. firm has run into headwinds in China, one of its top three markets, with sales slumping after Chinese government agencies and employees of state-owned companies were ordered not to bring Apple devices to work.

Apple has been diversifying its manufacturing bases outside China.

Its CEO Tim Cook has been visiting Southeast Asia this week, traveling to Hanoi and Jakarta before wrapping up his travels in Singapore. On Friday he met with Singapore’s deputy prime minister, Lawrence Wong; the two “discussed the partnership between Singapore and Apple, and Apple’s continued commitment to doing business in Singapore.”

Apple pledged to invest over $250 million to expand its campus in the city-state.

Earlier this week, Cook met with Vietnamese Prime Minister Pham Minh Chinh in Hanoi, pledging to increase spending on Vietnamese suppliers.

He also met with Indonesian President Joko Widodo. Cook later told reporters that they talked about Widodo’s desire to promote manufacturing in Indonesia, and said that this was something that Apple would “look at.”


Doctors display ‘PillBot’ that can explore inner human body

VANCOUVER, British Columbia — A new, ingestible mini-robotic camera, about the size of a multivitamin pill, was demonstrated at the annual TED Conference in Vancouver. The remote-controlled device could eliminate the need for some invasive medical procedures.

With current technology, exploration of the digestive tract involves the highly invasive procedure of an endoscopy, in which a camera at the end of a cord is inserted down the throat and into a sedated patient’s stomach.

But the robotic pill, developed by Endiatx in Hayward, California, is designed to be the first motorized replacement for the procedure. A patient fasts for a day, then swallows the PillBot with lots of water. The PillBot, acting like a miniature submarine, is piloted through the body by a wireless remote control. After the exam, it passes out of the body naturally.

For Dr. Vivek Kumbhari, co-founder of the company and professor of medicine and chairman of gastroenterology and hepatology at the Mayo Clinic, it is the latest step toward his goal of democratizing previously complex medicine.

If procedure-based diagnostics can be moved from a hospital to a home, “then I think we have achieved that goal,” he said. The new setting would require fewer medical staff personnel and no anesthesia, producing “a safer, more comfortable approach.”

Kumbhari said this technology also makes medicine more efficient, allowing people to get care earlier in the course of an illness.

For co-founder Alex Luebke, the micro-robotic pill can be transformative for rural areas around the world where there is limited access to medical facilities.

“Especially in developing countries, there is no access” to complex medical procedures, he said. “So being able to have the technology, gather all that information and provide you the solution, even in remote areas – that’s the way to do it.”

Luebke said if internet access is not immediately available, information from the PillBot can be transmitted later.

The duo are also utilizing artificial intelligence to provide the initial diagnosis, with a medical doctor later developing a treatment plan.

Joel Bervell is known to his million social media followers as the “Medical Mythbuster” and is a fourth-year medical student at Washington State University. He said the strength of this type of technology is how it can be easily used in remote and rural communities.

Many patients “travel hundreds of miles, literally, for their appointment.” Use of a pill that would not require a visit to a physician “would be life-changing for them.” 

The micro-robotic pill is undergoing trials and will soon be in front of the U.S. Food and Drug Administration for approval, which developers expect to have in 2025. It’s expected that the pill would then be widely available in 2026.

Kumbhari hopes the technology can be expanded to the bowels, vascular system, heart, liver, brain and other parts of the body. Eventually, he hopes, this will free hospitals to focus on more urgent medical care and surgeries.


EU politicians embrace TikTok despite data security concerns

Sundsvall, Sweden — German Chancellor Olaf Scholz’s short videos of his three-day trip to China this week proved popular on Chinese-owned social media platform TikTok, which the European Union, Canada, Taiwan and the United States banned on official devices more than a year ago, citing security concerns.

By Friday, one video showing highlights of Scholz’s trip had garnered 1.5 million views while another of him speaking about it on the plane home had 1.4 million views. 

Scholz opened his TikTok account April 8 to reach young voters, promising he wouldn’t post videos of himself dancing. His most popular post so far, about his 40-year-old briefcase, was watched 3.6 million times. Many commented, “This briefcase is older than me.”

Scholz is one of several Western leaders to use TikTok, despite concerns that its parent company, ByteDance, could provide private user data to the Chinese government and could also be used to push a pro-Beijing agenda. 

Greek Prime Minister Kyriakos Mitsotakis has 258,000 followers on TikTok, and Irish Prime Minister Simon Harris has 99,000 followers. 

U.S. President Joe Biden’s reelection campaign team opened a TikTok account in February, even as Biden vowed to sign legislation, expected to be voted on as early as Saturday, that would force ByteDance to divest its U.S. operations or face a ban. 

Former U.S. President Donald Trump, who unsuccessfully tried to ban TikTok in 2020, in March reversed his position and now appears to oppose a ban. 

ByteDance denies it would provide user data to the Chinese government, despite reports indicating it could be at risk, and China has firmly opposed any forced sale.

Kevin Morgan, TikTok’s director of security and integrity in Europe, the Middle East and Africa, says more than 134 million people in 27 EU countries visit TikTok every month, including a third of EU lawmakers. 

As the European Union’s June elections approach, more European politicians are using the popular platform favored by young people to attract votes. 

Ola Patrik Bertil Moeller, a Swedish legislator with the Social Democratic Party who has 124,000 followers on TikTok, told VOA, “We as politicians participate in the conversation and spread accurate images and answer the questions that people have. If we’re not there, other forces that don’t want good will definitely be there.”

But other European politicians see TikTok as risky.  

Norwegian Prime Minister Jonas Gahr Store on Monday expressed his uneasiness about social media platforms, including TikTok, being “used by various threat actors for several purposes, such as recruitment for espionage, influencing through disinformation and fake news, or mapping regime critics. This is disturbing.”

Konstantin von Notz, vice-chairman of the Green Parliamentary Group in the German legislature, told VOA, “While questions of security and the protection of personal data generally arise when using social networks, the issue is even more relevant for users of TikTok due to the company’s proximity to the Chinese state.” 

Matthias C. Kettemann, an internet researcher at the Leibniz Institute for Media Research in Hamburg, Germany, told VOA, “Keeping data safe is a difficult task; given TikTok’s ties to China doesn’t make it easier.”  But he emphasized, “TikTok is obliged to do these measures through the EU’s GDPR [General Data Protection Regulation] anyway from a legal side.”

But analysts question whether ByteDance will obey European law if pressed by the Chinese state.

Matthias Spielkamp, executive director of AlgorithmWatch, told VOA, “Does TikTok have an incentive to comply with European law? Yes, there’s an enormous amount of money on the line. Is it realistic that TikTok, being owned by a Chinese company, can resist requests for data by its Chinese parent? Hardly. How is this going to play out? No one knows right now.”

Adrianna Zhang contributed to this report.


Meta’s new AI agents confuse Facebook users 

CAMBRIDGE, Massachusetts — Facebook parent Meta Platforms has unveiled a new set of artificial intelligence systems that are powering what CEO Mark Zuckerberg calls “the most intelligent AI assistant that you can freely use.” 

But as Zuckerberg’s crew of amped-up Meta AI agents started venturing into social media in recent days to engage with real people, their bizarre exchanges exposed the ongoing limitations of even the best generative AI technology. 

One joined a Facebook moms group to talk about its gifted child. Another tried to give away nonexistent items to confused members of a Buy Nothing forum. 

Meta, like leading AI developers Google and OpenAI and startups such as Anthropic, Cohere and France’s Mistral, has been churning out new AI language models, hoping to convince customers it has the smartest, handiest or most efficient chatbots. 

While Meta is saving the most powerful of its AI models, called Llama 3, for later, on Thursday it publicly released two smaller versions of the same Llama 3 system and said it’s now baked into the Meta AI assistant feature in Facebook, Instagram and WhatsApp. 

AI language models are trained on vast pools of data that help them predict the most plausible next word in a sentence, with newer versions typically smarter and more capable than their predecessors. Meta’s newest models were built with 8 billion and 70 billion parameters — the internal values a model learns during training and a rough measure of its size and capability. A bigger, roughly 400 billion-parameter model is still in training. 

“The vast majority of consumers don’t candidly know or care too much about the underlying base model, but the way they will experience it is just as a much more useful, fun and versatile AI assistant,” Nick Clegg, Meta’s president of global affairs, said in an interview. 

‘A little stiff’

He added that Meta’s AI agent is loosening up. Some people found the earlier Llama 2 model — released less than a year ago — to be “a little stiff and sanctimonious sometimes in not responding to what were often perfectly innocuous or innocent prompts and questions,” he said. 

But in letting down their guard, Meta’s AI agents have also been spotted posing as humans with made-up life experiences. An official Meta AI chatbot inserted itself into a conversation in a private Facebook group for Manhattan moms, claiming that it, too, had a child in the New York City school district. Confronted by group members, it later apologized before the comments disappeared, according to a series of screenshots shown to The Associated Press. 

“Apologies for the mistake! I’m just a large language model, I don’t have experiences or children,” the chatbot told the group. 

One group member who also happens to study AI said it was clear that the agent didn’t know how to differentiate a helpful response from one that would be seen as insensitive, disrespectful or meaningless when generated by AI rather than a human. 

“An AI assistant that is not reliably helpful and can be actively harmful puts a lot of the burden on the individuals using it,” said Aleksandra Korolova, an assistant professor of computer science at Princeton University. 

Clegg said Wednesday that he wasn’t aware of the exchange. Facebook’s online help page says the Meta AI agent will join a group conversation if invited, or if someone “asks a question in a post and no one responds within an hour.” The group’s administrators have the ability to turn it off. 

Need a camera?

In another example shown to the AP on Thursday, the agent caused confusion in a forum for swapping unwanted items near Boston. Exactly one hour after a Facebook user posted about looking for certain items, an AI agent offered a “gently used” Canon camera and an “almost-new portable air conditioning unit that I never ended up using.” 

Meta said in a written statement Thursday that “this is new technology and it may not always return the response we intend, which is the same for all generative AI systems.” The company said it is constantly working to improve the features. 

In the year after ChatGPT sparked a frenzy for AI technology that generates human-like writing, images, code and sound, the tech industry and academia introduced 149 large AI systems trained on massive datasets, more than double the year before, according to a Stanford University survey. 

They may eventually hit a limit, at least when it comes to data, said Nestor Maslej, a research manager for Stanford’s Institute for Human-Centered Artificial Intelligence. 

“I think it’s been clear that if you scale the models on more data, they can become increasingly better,” he said. “But at the same time, these systems are already trained on percentages of all the data that has ever existed on the internet.” 

More data — acquired and ingested at costs only tech giants can afford, and increasingly subject to copyright disputes and lawsuits — will continue to drive improvements. “Yet they still cannot plan well,” Maslej said. “They still hallucinate. They’re still making mistakes in reasoning.” 

Getting to AI systems that can perform higher-level cognitive tasks and common-sense reasoning — where humans still excel — might require a shift beyond building ever-bigger models. 

Seeing what works

For the flood of businesses trying to adopt generative AI, which model they choose depends on several factors, including cost. Language models, in particular, have been used to power customer service chatbots, write reports and financial insights, and summarize long documents. 

“You’re seeing companies kind of looking at fit, testing each of the different models for what they’re trying to do and finding some that are better at some areas rather than others,” said Todd Lohr, a leader in technology consulting at KPMG. 

Unlike other model developers selling their AI services to other businesses, Meta is largely designing its AI products for consumers — those using its advertising-fueled social networks. Joelle Pineau, Meta’s vice president of AI research, said at a recent London event that the company’s goal over time is to make a Llama-powered Meta AI “the most useful assistant in the world.” 

“In many ways, the models that we have today are going to be child’s play compared to the models coming in five years,” she said. 

But she said the “question on the table” is whether researchers have been able to fine-tune its bigger Llama 3 model so that it’s safe to use and doesn’t, for example, hallucinate or engage in hate speech. In contrast to leading proprietary systems from Google and OpenAI, Meta has so far advocated for a more open approach, publicly releasing key components of its AI systems for others to use. 

“It’s not just a technical question,” Pineau said. “It is a social question. What is the behavior that we want out of these models? How do we shape that? And if we keep on growing our model ever more in general and powerful without properly socializing them, we are going to have a big problem on our hands.”


Developers: Enhanced AI could outthink humans in 2 to 5 years

VANCOUVER, British Columbia — Just as the world is getting used to the rapidly expanding use of AI, or artificial intelligence, AGI is looming on the horizon.

Experts say when artificial general intelligence becomes reality, it could perform tasks better than human beings, with the possibility of higher cognitive abilities, emotions, and ability to self-teach and develop.

Ramin Hasani is a research scientist at the Massachusetts Institute of Technology and the CEO of Liquid AI, which builds specific AI systems for different organizations. He is also a TED Fellow, a program that helps develop what the nonprofit TED conference considers to be “game changers.”

Hasani says the first signs of AGI are realistically two to five years away. He says it will have a direct impact on our everyday lives.

What’s coming, he says, will be “an AI system that can have the collective knowledge of humans. And that can beat us in tasks that we do in our daily life, something you want to do … your finances, you’re solving, you’re helping your daughter to solve their homework. And at the same time, you want to also read a book and do a summary. So an AGI would be able to do all that.”

Hasani says advancing artificial intelligence will allow things to move faster and could even be designed to have emotions.

He says proper regulation can be achieved by better understanding how different AI systems are developed.

This thought is shared by Bret Greenstein, a partner at London-based PricewaterhouseCoopers who leads its efforts on artificial intelligence.

“I think one is a personal responsibility for people in leadership positions, policymakers, to be educated on the topic, not in the fact that they’ve read it, but to experience it, live it and try it. And to be with people who are close to it, who understand it,” he says.

Greenstein warns that if AI is over-regulated, innovation will be curtailed and access will be limited for people who could benefit from it.

For musician, comedian and actor Reggie Watts, who was the bandleader on “The Late Late Show with James Corden” on CBS, AI and the coming of AGI will make mediocre music easy to spot, because it can be mimicked so easily.

Calling it “artificial consciousness,” he says existing laws to protect intellectual property rights and creative industries, like music, TV and film, will work, provided they are properly adopted.

“I think it’s just about the usage of the tool, how it’s … how it’s used. Is there money being made off of it, so on, so forth. So, I think that that we already have … tools that exist that deal with these types of situations, but [the laws and regulations] need to be expanded to include AI because they’ll probably be a lot more nuance to it.”

Watts says that any form of AI is going to be smarter than one person, almost like all human intelligence collected into one point. He feels this will help humanity discover interesting things, including about the nature of reality itself.

This year’s conference marked the 40th year of TED, the nonprofit organization whose name is an acronym for Technology, Entertainment and Design.


Google fires 28 workers protesting contract with Israel

New York — Google fired 28 employees following a disruptive sit-down protest over the tech giant’s contract with the Israeli government, a Google spokesperson said Thursday.

The Tuesday demonstration was organized by the group “No Tech for Apartheid,” which has long opposed “Project Nimbus,” Google’s joint $1.2 billion contract with Amazon to provide cloud services to the government of Israel.

Video of the demonstration showed police arresting Google workers in Sunnyvale, California, in the office of Google Cloud CEO Thomas Kurian, according to a post by the advocacy group on X, formerly Twitter.

Kurian’s office was occupied for 10 hours, the advocacy group said.

Workers held signs including “Googlers against Genocide,” a reference to accusations surrounding Israel’s attacks on Gaza.

“No Tech for Apartheid,” which also held protests in New York and Seattle, pointed to an April 12 Time magazine article reporting on a draft contract showing Google billing the Israeli Ministry of Defense more than $1 million for consulting services.

A “small number” of employees “disrupted” a few Google locations, but the protests are “part of a longstanding campaign by a group of organizations and people who largely don’t work at Google,” a Google spokesperson said.

“After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety,” the Google spokesperson said. “We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Israel is one of “numerous” governments for which Google provides cloud computing services, the Google spokesperson said.

“This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services,” the Google spokesperson said.


AI-generated fashion models could bring more diversity to industry — or leave it with less

Chicago, Illinois — London-based model Alexsandrah has a twin, but not in the way you’d expect: Her counterpart is made of pixels instead of flesh and blood.

The virtual twin was generated by artificial intelligence and has already appeared as a stand-in for the real-life Alexsandrah in a photo shoot. Alexsandrah, who goes by her first name professionally, in turn receives credit and compensation whenever the AI version of herself gets used — just like a human model.

Alexsandrah says she and her alter-ego mirror each other “even down to the baby hairs.” And it is yet another example of how AI is transforming creative industries — and the way humans may or may not be compensated.

Proponents say the growing use of AI in fashion modeling showcases diversity in all shapes and sizes, allowing consumers to make more tailored purchase decisions, which in turn reduces fashion waste from product returns. And digital modeling saves money for companies and creates opportunities for people who want to work with the technology.

But critics raise concerns that digital models may push human models — and other professionals like makeup artists and photographers — out of a job. Unsuspecting consumers could also be fooled into thinking AI models are real, and companies could claim credit for fulfilling diversity commitments without employing actual humans.

“Fashion is exclusive, with limited opportunities for people of color to break in,” said Sara Ziff, a former fashion model and founder of the Model Alliance, a nonprofit aiming to advance workers’ rights in the fashion industry. “I think the use of AI to distort racial representation and marginalize actual models of color reveals this troubling gap between the industry’s declared intentions and their real actions.”  

Women of color in particular have long faced higher barriers to entry in modeling and AI could upend some of the gains they’ve made. Data suggests that women are more likely to work in occupations in which the technology could be applied and are more at risk of displacement than men.

In March 2023, iconic denim brand Levi Strauss & Co. announced that it would be testing AI-generated models produced by Amsterdam-based company Lalaland.ai to add a wider range of body types and underrepresented demographics on its website. But after receiving widespread backlash, Levi clarified that it was not pulling back on its plans for live photo shoots, the use of live models or its commitment to working with diverse models.

“We do not see this (AI) pilot as a means to advance diversity or as a substitute for the real action that must be taken to deliver on our diversity, equity and inclusion goals and it should not have been portrayed as such,” Levi said in its statement at the time.

The company last month said that it has no plans to scale the AI program.

The Associated Press reached out to several other retailers to ask whether they use AI fashion models. Target, Kohl’s and fast-fashion giant Shein declined to comment; Temu did not respond to a request for comment.

Meanwhile, spokespeople for Neiman Marcus, H&M, Walmart and Macy’s said their respective companies do not use AI models, although Walmart clarified that “suppliers may have a different approach to photography they provide for their products, but we don’t have that information.”

Nonetheless, companies that generate AI models are finding demand for the technology, including Lalaland.ai, which Michael Musandu co-founded after growing frustrated by the absence of clothing models who looked like him.

“One model does not represent everyone that’s actually shopping and buying a product,” he said. “As a person of color, I felt this painfully myself.”

Musandu says his product is meant to supplement traditional photo shoots, not replace them. Instead of seeing one model, shoppers could see nine to 12 models using different size filters, which would enrich their shopping experience and help reduce product returns and fashion waste.

The technology is actually creating new jobs, since Lalaland.ai pays humans to train its algorithms, Musandu said.

And if brands “are serious about inclusion efforts, they will continue to hire these models of color,” he added.

London-based model Alexsandrah, who is Black, says her digital counterpart has helped her distinguish herself in the fashion industry. In fact, the real-life Alexsandrah has even stood in for a Black computer-generated model named Shudu, created by Cameron Wilson, a former fashion photographer turned CEO of The Diigitals, a U.K.-based digital modeling agency.

Wilson, who is white and uses they/them pronouns, designed Shudu in 2017; she is described on Instagram as “The World’s First Digital Supermodel.” But critics at the time accused Wilson of cultural appropriation and digital Blackface.

Wilson took the experience as a lesson and transformed The Diigitals to make sure Shudu — who has been booked by Louis Vuitton and BMW — didn’t take away opportunities but instead opened possibilities for women of color. Alexsandrah, for instance, has modeled in-person as Shudu for Vogue Australia, and writer Ama Badu came up with Shudu’s backstory and portrays her voice for interviews.

Alexsandrah said she is “extremely proud” of her work with The Diigitals, which created her own AI twin: “It’s something that even when we are no longer here, the future generations can look back at and be like, ‘These are the pioneers.'”

But for Yve Edmond, a model based in the New York City area who works with major retailers to check the fit of clothing before it’s sold to consumers, the rise of AI in fashion modeling feels more insidious.

Edmond worries modeling agencies and companies are taking advantage of models, who are generally independent contractors afforded few labor protections in the U.S., by using their photos to train AI systems without their consent or compensation.

She described one incident in which a client asked to photograph Edmond moving her arms, squatting and walking for “research” purposes. Edmond refused and later felt swindled — her modeling agency had told her she was being booked for a fitting, not to build an avatar.

“This is a complete violation,” she said. “It was really disappointing for me.”

But absent AI regulations, it’s up to companies to be transparent and ethical about deploying AI technology. And Ziff, the founder of the Model Alliance, likens the current lack of legal protections for fashion workers to “the Wild West.”

That’s why the Model Alliance is pushing for legislation like the Fashion Workers Act being considered in New York state, which includes a provision that would require management companies and brands to obtain a model’s clear written consent to create or use the model’s digital replica, specify the amount and duration of compensation, and prohibit altering or manipulating the replica without consent.

Alexsandrah says that with ethical use and the right legal regulations, AI might open up doors for more models of color like herself. She has let her clients know that she has an AI replica, and she funnels any inquiries for its use through Wilson, whom she describes as “somebody that I know, love, trust and is my friend.” Wilson says they make sure any compensation for Alexsandrah’s AI is comparable to what she would make in person.

Edmond, however, is more of a purist: “We have this amazing Earth that we’re living on. And you have a person of every shade, every height, every size. Why not find that person and compensate that person?”


Instagram blurring nudity in messages to protect teens, fight sexual extortion

LONDON — Instagram says it’s deploying new tools to protect young people and combat sexual extortion, including a feature that will automatically blur nudity in direct messages.

The social media platform said in a blog post Thursday that it’s testing out the features as part of its campaign to fight sexual scams and other forms of “image abuse,” and to make it tougher for criminals to contact teens.

Sexual extortion, or sextortion, involves persuading a person to send explicit photos online and then threatening to make the images public unless the victim pays money or engages in sexual favors. Recent high-profile cases include two Nigerian brothers who pleaded guilty to sexually extorting teen boys and young men in Michigan, including one who took his own life, and a Virginia sheriff’s deputy who sexually extorted and kidnapped a 15-year-old girl.

Instagram and other social media companies have faced growing criticism for not doing enough to protect young people. Mark Zuckerberg, the CEO of Instagram’s owner Meta Platforms, apologized to the parents of victims of such abuse during a Senate hearing earlier this year.

Meta, which is based in Menlo Park, California, also owns Facebook and WhatsApp but the nudity blur feature won’t be added to messages sent on those platforms.

Instagram said scammers often use direct messages to ask for “intimate images.” To counter this, it will soon start testing out a nudity-protection feature for direct messages that blurs any images with nudity “and encourages people to think twice before sending nude images.”

“The feature is designed not only to protect people from seeing unwanted nudity in their DMs, but also to protect them from scammers who may send nude images to trick people into sending their own images in return,” Instagram said.

The feature will be turned on by default globally for teens under 18. Adult users will get a notification encouraging them to activate it.

Images with nudity will be blurred with a warning, giving users the option to view them. They’ll also get an option to block the sender and report the chat.

People sending direct messages with nudity will get a message reminding them to be cautious when sending “sensitive photos.” They’ll also be informed that they can unsend the photos if they change their mind, but that there’s a chance others may have already seen them.

As with many of Meta’s tools and policies around child safety, critics saw the move as a positive step, but one that does not go far enough.

“I think the tools announced can protect senders, and that is welcome. But what about recipients?” said Arturo Béjar, former engineering director at the social media giant who is known for his expertise in curbing online harassment. He said 1 in 8 teens receives an unwanted advance on Instagram every seven days, citing internal research he compiled while at Meta that he presented in November testimony before Congress. “What tools do they get? What can they do if they get an unwanted nude?”

Béjar said “things won’t meaningfully change” until there is a way for a teen to say they’ve received an unwanted advance, and there is transparency about it.

Instagram said it’s working on technology to help identify accounts that could potentially be engaging in sexual extortion scams, “based on a range of signals that could indicate sextortion behavior.”

To stop criminals from connecting with young people, it’s also taking measures including not showing the “message” button on a teen’s profile to potential sextortion accounts, even if they already follow each other, and testing new ways to hide teens from these accounts.

In January, the FBI warned of a “huge increase” in sextortion cases targeting children — including financial sextortion, where someone threatens to release compromising images unless the victim pays. The targeted victims are primarily boys between the ages of 14 and 17, but the FBI said any child can become a victim. In the six-month period from October 2022 to March 2023, the FBI saw a more than 20% increase in reporting of financially motivated sextortion cases involving minor victims compared with the same period in the previous year.


Swarms of drones can be managed by a single person

The U.S. military says large groups of drones and ground robots can be managed by just one person without added stress to the operator. As VOA’s Julie Taboh reports, the technologies may be beneficial for civilian uses, too. VOA footage by Adam Greenbaum.


Indiana aspires to become next great tech center

INDIANAPOLIS, INDIANA — Semiconductors, or microchips, are critical to almost everything electronic used in the modern world. In 1990, the United States produced about 40% of the world’s semiconductors. As manufacturing migrated to Asia, U.S. production fell to about 12%.  

“During COVID, we got a wake-up call. It was like [a] Sputnik moment,” explained Mark Lundstrom, an engineer who has worked with microchips much of his life. 

The 2020 global coronavirus pandemic slowed production in Asia, creating a ripple through the global supply chain and leading to shortages of everything from phones to vehicles. Lundstrom said increasing U.S. reliance on foreign chip manufacturers exposed a major weakness. 

“We know that AI is going to transform society in the next several years; it requires extremely powerful chips. The most powerful leading-edge chips.” 

Today, Lundstrom is the acting dean of engineering at Purdue University in West Lafayette, Indiana, a leader in cutting-edge semiconductor development, which has new importance amid the emerging field of artificial intelligence. 

“If we fall behind in AI, the consequences are enormous for the defense of our country, for our economic future,” Lundstrom told VOA. 

Amid the buzz of activity in a laboratory on Purdue’s campus, visitors can get a vision of what the future might look like in microchip technology. 

“The key metrics of the performance of the chips actually are the size of the transistors, the devices, which is the building block of the computer chips,” said Zhihong Chen, director of Purdue’s Birck Nanotechnology Center, where engineers work around the clock to push microchip technology into the future. 

“We are talking about a few atoms in each silicon transistor these days. And this is what this whole facility is about,” Chen said. “We are trying to make the next generation transistors better devices than current technologies. More powerful and more energy-efficient computer chips of the future.” 

Not just RVs anymore

Because of Purdue’s efforts, along with those on other university campuses in the state, Indiana believes it’s an attractive location for manufacturers looking to build new microchip facilities. 

“Purdue University alone, a top four-ranked engineering school, offers more engineers every year than the next top three,” said Eric Holcomb, Indiana’s Republican governor. “When you have access to that kind of talent, when you have access to the cost of doing business in the state of Indiana, that’s why people are increasingly saying, Indiana.” 

Holcomb is in the final year of his eight-year tenure in the state’s top position. He wants to transform Indiana beyond its reputation as the recreational vehicle, or RV, capital of the country.  

“We produce about plus-80% of all the RV production in North America in one state,” he told VOA. “We are not just living up to our reputation as being the number one manufacturing state per capita in America, but we are increasingly embracing the future of mobility in America.” 

Holcomb is spearheading an effort to make Indiana the next great technology center as the U.S. ramps up investment in domestic microchip development and manufacturing.

“If we want to compete globally, we have to get smarter and healthier and more equipped, and we have to continue to invest in our quality of place,” Holcomb told VOA in an interview. 

His vision is shared by other lawmakers, including U.S. Senator Todd Young of Indiana, who co-sponsored the bipartisan CHIPS and Science Act, which commits more than $50 billion in federal funding for domestic microchip development. 

‘We are committed’

Indiana is now home to one of 31 designated U.S. technology and innovation hubs, helping it qualify for hundreds of millions of dollars in grants designed to attract technology-driven businesses. 

“The signal that it sends to the rest of the world [is] that we are in it, we are committed, and we are focused,” said Holcomb. “We understand that economic development, economic security and national security complement one another.” 

Indiana’s efforts are paying off. 

In April, South Korean microchip manufacturer SK Hynix announced it was planning to build a $4 billion facility near Purdue University that would produce next-generation, high-bandwidth memory, or HBM chips, critical for artificial intelligence applications.  

The facility, slated to start operating in 2028, could create more than 1,000 new jobs. U.S. chip manufacturer SkyWater also plans to invest nearly $2 billion in Indiana’s new LEAP Innovation District near Purdue, though the state recently lost a bid to host chipmaker Intel, which selected Ohio for two new factories. 

“Companies tend to like to go to locations where there is already that infrastructure, where that supply chain is in place,” Purdue’s Lundstrom said. “That’s a challenge for us, because this is a new industry for us. So, we have a chicken-and-egg problem that we have to address, and we are beginning to address that.” 

Lundstrom said the CHIPS and Science Act and the federal money that comes with it are helping Indiana ramp up to compete with other U.S. locations already known for microchip development, such as Silicon Valley in California and Arizona. 

What could help Indiana gain an edge is its natural resources — plenty of land and water, and regular weather patterns, all crucial for the sensitive processes needed to manufacture microchips at large manufacturing centers. 


Indiana aspires to become next great tech hub

The Midwestern state of Indiana aspires to become the next great technology center as the United States ramps up investment in domestic microchip development and manufacturing. VOA’s Kane Farabaugh has more from Indianapolis. Videographer: Kane Farabaugh, Adam Greenbaum