Israel Attacks Hamas with AI Technology; Lee Jae-myung Pledges 'Reducing Polarization with AI'
Israel used new AI-infused military technology to find a Hamas commander whose location it had been unable to pin down through wiretapping, and killed him in an airstrike on October 31, 2023, that also killed more than 125 civilians in the vicinity.
A New York Times investigation, citing three Israeli and American officials familiar with the matter, reported on the 25th that engineers in Unit 8200, Israel's equivalent of the National Security Agency, had been integrating AI into the military's technology tools.
The short-lived audio tool is just one example of how Israel has used the Gaza war to rapidly test and deploy AI-enabled military technology at a level never seen before, according to nine American and Israeli defense officials interviewed by the New York Times.
Presidential candidate Lee Jae-myung pledged "reducing polarization with AI" and the fair sharing of opportunities and results at a TV debate on the 25th.
On the 22nd, the candidate had a private conversation with Professor Harari of Israel's Hebrew University at the National Assembly, saying that "the world's conflicts come from inequality" and that "AI will solve the inequality gap."
Professor Harari called for "employment retraining and welfare expansion" as part of "an increased role for the government."
Politico reported that US President Trump's eldest son will establish a social club called "Executive Branch" with David Sacks, the Trump administration's AI and cryptocurrency czar, ahead of his visit to Korea on the 30th.
The exclusive social club, "Executive Branch," will open in Washington, D.C., with a membership fee reported at $500,000, and Omeed Malik, who co-founded the venture capital firm 1789 Capital with Trump Jr., will host the opening event. Politico published an invitation the same day.
During the past 18 months of the Gaza war, Israel has combined AI with facial recognition software to match partially obscured or injured faces to real identities, used AI to identify potential airstrike targets, and created an Arabic AI model to power chatbots that can scan and analyze text messages, social media posts, and other Arabic-language data. "Much of this effort has come from partnerships between enlisted men from Unit 8200 and reservists who work at tech companies like Google, Microsoft, and Meta," the Times reported. "Unit 8200 has also created what it calls 'The Studio,' an innovation hub and a place to connect experts with AI projects."
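The matching step described above, comparing an embedding of a partially visible face against a database of known identities, can be sketched in a few lines. Everything below (the names, the 128-dimensional vectors, the 0.8 threshold) is an invented illustration of the general technique, not a detail of Israel's actual system:

```python
import math
import random

def cosine_similarity(a, b):
    # Compare two face-embedding vectors on a -1..1 scale.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def match_face(query, database, threshold=0.8):
    """Return (identity, score) for the best match above a threshold.

    A partially obscured face yields a noisier embedding, so systems
    built this way can misfire -- the kind of false match officials
    described to the Times.
    """
    best_id, best_score = None, -1.0
    for identity, embedding in database.items():
        score = cosine_similarity(query, embedding)
        if score > best_score:
            best_id, best_score = identity, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Toy 128-dimensional embeddings (invented data).
random.seed(0)
db = {name: [random.gauss(0, 1) for _ in range(128)]
      for name in ("person_a", "person_b")}
# Simulate an obscured face: person_a's embedding plus heavy noise.
query = [x + random.gauss(0, 1.2) for x in db["person_a"]]
print(match_face(query, db))
```

With heavy noise the score often drops below the threshold and no identity is returned, which is exactly the failure mode that leads to manual review or, as described later in this article, wrongful flagging.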
“The urgent need to respond to crises has accelerated innovation, much of it in AI,” Hadas Lorber, director of the Institute for Applied Research in Responsible A.I. at Israel’s Holon Institute of Technology and a former senior director of Israel’s National Security Council, told the Times.
“This has led to technologies that are game-changing on the battlefield and advantages that have proven decisive in combat.” But these technologies also raise serious ethical questions, Lorber said. “AI requires checks and balances, and humans must make the final decisions.”
Israeli and U.S. officials have acknowledged to the Times that even as Israel raced to develop an AI arsenal, deployment of the technology has sometimes led to civilian deaths, as well as false identifications and arrests.
Some officials have struggled with the ethical implications of AI tools, which they say could lead to increased surveillance and other civilian killings, the anonymous sources said.
European and U.S. defense officials have warned that “no country is more aggressive in experimenting with AI tools in real-time combat than Israel,” and that “they provide a glimpse into how these technologies might be used in future wars, and how they could go wrong.”
Before the current war, Israel had used earlier conflicts in Gaza and Lebanon to test and develop military tools such as drones, phone-hacking software, and the Iron Dome defense system, which intercepts short-range rockets.
Four Israeli officials said that AI technology was rapidly deployed in the hunt for 250 hostages shortly after Hamas launched a cross-border attack on Israel on Oct. 7, 2023.
Officials say this allowed Unit 8200 and the reservists at "The Studio" to work together to rapidly develop new AI capabilities.

Avi Hasson, CEO of Startup Nation Central, an Israeli nonprofit that connects investors and companies, told the Times that reservists from Meta, Google, and Microsoft had been instrumental in driving innovation in drones and data integration. "They bring know-how and access to key technologies that the military lacks," Hasson said.

The Israeli military soon began using AI to bolster its drone fleet. "We've built a drone that can lock onto and track targets from a distance using AI-based algorithms," Aviv Shapira, founder and CEO of XTEND, a software and drone company that partners with the Israeli military, told the Times. "In the past, homing relied on focusing on an image of the target, but now the AI can recognize and track the object itself with great precision, whether it's a moving vehicle or a person." Shapira added that both the Israeli military and the U.S. Department of Defense, which are major customers, are aware of the ethical implications of AI in warfare and have discussed responsible use of the technology.
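The "lock on and track" behavior Shapira describes is, at its core, frame-to-frame data association: predict where the locked target should be next, then accept the nearest new detection within a gate. The sketch below is a deliberately simplified constant-velocity illustration of that idea, not XTEND's algorithm; all coordinates are invented:

```python
def predict(track):
    # Constant-velocity prediction: where the target should appear next.
    x, y, vx, vy = track
    return (x + vx, y + vy)

def update_track(track, detections, gate=5.0):
    """Associate the locked target with the detection nearest the
    predicted position; reject anything outside the gate (lock lost)."""
    px, py = predict(track)
    best, best_d = None, gate
    for (dx, dy) in detections:
        d = ((dx - px) ** 2 + (dy - py) ** 2) ** 0.5
        if d < best_d:
            best, best_d = (dx, dy), d
    if best is None:
        return None  # no plausible detection: the lock is lost
    x, y, *_ = track
    # Re-estimate velocity from the accepted detection.
    return (best[0], best[1], best[0] - x, best[1] - y)

# Two frames of detections: the target drifts while a decoy sits far away.
track = (0.0, 0.0, 1.0, 1.0)  # x, y, vx, vy
for frame in [[(1.1, 0.9), (40.0, 40.0)], [(2.0, 2.1), (41.0, 41.0)]]:
    track = update_track(track, frame)
print(track)
```

Real trackers replace the nearest-neighbor step with learned appearance models, which is what lets them follow "the object itself" rather than a fixed image patch.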
Three Israeli officers told the Times that one of the tools developed at "The Studio" was an Arabic-language large language model.
The large language model was first reported on March 6 by the Israeli-Palestinian news site +972. "According to our own investigations, Local Call and the Guardian, the Israeli military is developing a new AI tool similar to ChatGPT and training it on millions of Arabic conversations obtained through surveillance of Palestinians in the occupied territories," +972 magazine said. "The AI tool, which is being built under the auspices of Unit 8200, an elite cyberwarfare squad under Israel's Military Intelligence Directorate, is known as a large language model (LLM), a machine learning program that can analyze information and generate, translate, predict and summarize text." The magazine continued, "While publicly available LLMs, such as ChatGPT's engine, are trained on information scraped from the internet, the new model being developed by the Israeli army is being fed vast amounts of information gleaned from the everyday lives of Palestinians living under occupation."
The existence of Unit 8200’s LLM was confirmed to +972, Local Call and the Guardian by three Israeli security sources with knowledge of its development, who said the model was still being trained in the second half of last year, and it is unclear whether it has been deployed yet or how exactly the army will use it.
The report continued, citing sources, that "a key advantage for the Israeli army lies in the tool's ability to quickly process large amounts of surveillance data to 'answer questions' about specific individuals," adding that "given that the army is already using smaller language models, the LLM could potentially further expand Israel's prosecution and arrest of Palestinians." Earlier Israeli military AI efforts had been hampered by a shortage of suitable Arabic training data: most available text was in Modern Standard Arabic, which is more formal than the dozens of dialects used in spoken Arabic, making it difficult to build models.
The Israeli military no longer has that problem, the three officers told the Times.
For decades, Israel has intercepted text messages, transcribed phone calls, and scraped social media posts in various dialects from Palestinians, including those in the Gaza Strip.
As a result, Israeli officers built a large language model in the early months of the war and created a chatbot that could run queries in Arabic. They merged the tool with a multimedia database, which eventually allowed analysts to run sophisticated searches on images and videos, the four Israeli officials told the Times.
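Merging a language tool with a multimedia database so analysts can run sophisticated searches is, in essence, cross-source retrieval: indexing records from many collection channels and answering a single query against all of them. A minimal sketch with an inverted index over invented records illustrates the general idea; none of the data or field names below come from the reporting:

```python
from collections import defaultdict

# Invented records standing in for mixed "sources" (messages, post captions).
records = [
    {"id": 1, "source": "sms",  "text": "meeting at the market tomorrow"},
    {"id": 2, "source": "post", "text": "photos from the market"},
    {"id": 3, "source": "sms",  "text": "call me tonight"},
]

# Build an inverted index: word -> set of record ids containing it.
index = defaultdict(set)
for rec in records:
    for word in rec["text"].split():
        index[word].add(rec["id"])

def search(query):
    """Return records containing every query word, across all sources."""
    ids = None
    for word in query.split():
        ids = index[word] if ids is None else ids & index[word]
    return [r for r in records if r["id"] in (ids or set())]

print([r["id"] for r in search("market")])  # records 1 and 2
```

A production system would swap exact word matching for embedding similarity (so images and videos can be indexed alongside text), but the query-across-scattered-sources structure is the same.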
When Israel assassinated Hezbollah leader Hassan Nasrallah in September 2024, the chatbot analyzed reactions across the Arabic-speaking world, the three Israeli officers testified. By distinguishing among Lebanon's dialects, the technology helped Israel gauge public sentiment and assess whether there was public pressure for a counterstrike.
After Hamas's October 7, 2023 attack, Israeli intelligence was unable to locate Ibrahim Biari, a Hamas commander in northern Gaza who had helped plan the massacre and was believed to be hiding in Gaza's underground tunnel network.
Israeli officers turned to the new AI-infused military technology to track Biari. Shortly thereafter, Israel tested an AI audio tool that listened in on his calls and revealed the commander's approximate location.
Based on this information, Israel ordered an airstrike on the area on October 31, 2023, killing Biari and more than 125 civilians, according to Airwars, a London-based conflict monitoring group. Since then, Israeli intelligence has used the audio tool, along with maps and photos of Gaza's maze of underground tunnels, to find hostages, and over time the tool has been improved to better identify individuals, two Israeli officers told the Times.
The chatbot occasionally failed to identify some modern slang and words transliterated from English into Arabic, the two officials said.
In response, “Israeli intelligence officers with expertise in various dialects had to review and correct its work,” one officer told the Times.
The chatbot sometimes gave incorrect answers. “It returned a picture of a pipe instead of a gun,” the two Israeli officers said, noting one glitch.
Since the Oct. 7 attack, Israeli military checkpoints between northern and southern Gaza have been equipped with cameras that scan high-resolution images of Palestinians and feed them to an AI-powered facial recognition program. The system has struggled to identify people whose faces are obscured, and the military has arrested and interrogated Palestinians wrongly flagged by it, an Israeli intelligence officer told the Times.
Regarding Israel's large language model, Ori Goshen, co-CEO and co-founder of the Israeli company AI21 Labs, testified that "the benefits an LLM offers to intelligence agencies could include the ability to quickly process information and generate lists of 'suspects' for arrest," and that "the key is the ability to search through data scattered across multiple sources," +972 reported.
The magazine said that "the Israeli military used the Lavender program to generate 'kill lists' of tens of thousands of Palestinians, and the AI 'convicted' them of showing traits it had learned to associate with members of armed groups," and that "the Israeli military subsequently bombed many of these individuals while they were at home with their families, despite the program reportedly having a 10% error rate." The magazine said, "The human oversight of the assassination process was merely a 'rubber stamp,'" and that "the soldiers treated Lavender's 'kill list' output 'as if it were a human decision.'"
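The magazine's figures invite a quick sanity check: at a reported 10% error rate, a list of tens of thousands implies thousands of misidentified people. The list size below is illustrative, chosen to match the "tens of thousands" scale in the report, not a figure from this article:

```python
# Back-of-the-envelope: what a 10% error rate means at the scale
# +972 describes. The list size is an illustrative assumption.
flagged = 37_000        # people on the reported "kill lists" (order of magnitude)
error_rate = 0.10       # the program's reported error rate
wrongly_flagged = int(flagged * error_rate)
print(wrongly_flagged)  # thousands of people misidentified at that scale
```

This is the arithmetic behind critics' objection that "rubber stamp" human review cannot absorb an error rate of that size.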
The CIA has developed tools similar to ChatGPT to analyze open source information, and the British intelligence service is developing its own LLM.
“Former British and American security officials told +972, Local Call, and the Guardian that Israeli intelligence agencies are taking greater risks than their American or British counterparts when it comes to incorporating AI systems into their intelligence analysis,” the magazine said.
Brianna Rosen, a former White House security official and now a research fellow in military and security studies at Oxford University, said that intelligence analysts using tools like ChatGPT could potentially “spot threats that humans might miss before they happen,” but “they run the risk of making the wrong connections, drawing the wrong conclusions, and making mistakes, some of which could have very serious consequences.”
The chatbot of Israel's Unit 8200 was trained on 100 billion words of Arabic, acquired through Israel's mass surveillance of Palestinians under its military control.
Experts warn that this is a serious violation of Palestinian rights.
“We’re talking about people who have absolutely no criminal record being accused of something, and then having their personal information collected to train a tool that could later prove their suspicions,” Zach Campbell, senior technology researcher at Human Rights Watch, told +972, Local Call, and the Guardian.
“Palestinians have become the guinea pigs in Israel’s labs developing these technologies and weaponizing AI,” Nadim Nashif, executive director and founder of the Palestinian digital rights and advocacy group 7amleh, told the Guardian. “The whole purpose is to maintain the apartheid and occupation regimes that use these technologies to dominate people and control their lives, and this is a serious and ongoing violation of Palestinian digital rights, and therefore human rights.”
On the 25th, presidential candidate Lee Jae-myung said of polarization in the third TV debate, "The main cause is that our society's resources and opportunities are concentrated on one side, so they cannot be used efficiently." He said, "We need to find new areas of growth. AI, renewable energy, and the cultural sector are areas where we have strengths," and pointed out, "We can reduce this gap by developing and advancing new industrial sectors and fairly sharing the opportunities and results in those areas." He summed this up as "resolving polarization with AI."
The candidate said on the 22nd to Professor Harari, formerly of Israel's Hebrew University: "All of the world's conflicts and instabilities stem from enormous gaps and inequalities. Since AI technology is a newly developing field, I hope it can be a means of reducing inequality and gaps in this area. That is the role of politics and of the government that designs this system." He added, "I once asked, 'How about having a sovereign wealth fund invest in AI and secure shares?' and 'Collecting taxes is one method, but how about the public participating in the business itself?' and I was attacked a lot for being a communist," speaking of "expanding supply."
Professor Harari, who was the sole speaker on the topic of AI in the National Assembly, said, “What the government can do is regulate the algorithm.”
He said that “employment retraining and welfare expansion” are the government’s roles.
On the 23rd, the candidate appeared on a right-wing channel and said, "Most of the far left were eliminated through the primaries for the April 10 general election (last year), and the seven who did not drop out were replaced through nominations," characterizing the "dropouts due to party-member-sovereignty nominations" as an "elimination of the far left." See <Extremist economic vision lacks concrete alternatives for the far right, Lee Jae-myung 'Tax-free labor reduction with AI', March 3, 2025>
<Democratic Party 'Right Party Members' Collective Intelligence Sovereignty' National Treasury Support Lee Jae-myung's Nazi Party System, May 23, 2024>
<Lee Jae-myung's Park Chung-hee-style command economy restoration 'KOSPI 5,000 surge' pledge, January 3, 2022>
(Published 2025.04.26)