IDF colonel discusses ‘data science magic powder’ for locating terrorists

A video has surfaced of a senior official at Israel’s cyber intelligence agency, Unit 8200, talking last year about the use of machine learning “magic powder” to help identify Hamas targets in Gaza.
The footage raises questions about the accuracy of a recent statement on the use of artificial intelligence (AI) by the Israel Defense Forces (IDF), which said it “does not use an artificial intelligence system that identifies terrorist operatives or tries to predict whether a person is a terrorist”.
However, in the video, the head of data science and AI at Unit 8200 – named only as “Colonel Yoav” – said he would reveal an “example of one of the tools we use” before describing how the intelligence division used machine learning techniques in Israel’s May 2021 offensive in Gaza for “finding new terrorists”.
“Let’s say we have some terrorists that form a group and we know only some of them,” he said. “By practising our data science magic powder we are able to find the rest of them.”
The descriptions in the video of technology used by Unit 8200 resemble recent testimony from six IDF insiders about the military’s use of an AI tool called “Lavender” during its offensive against Hamas. They said the AI-generated database had been used to assist intelligence officers involved in the bombing campaign in Gaza, helping to identify tens of thousands of potential human targets.
In its rebuttal, the IDF said some of the accounts were “baseless”. However, the accounts are consistent with Yoav’s remarks at an AI conference at Tel Aviv University in February last year. The video, in which Yoav can be heard but not seen, was hosted on the university’s YouTube channel, and until recently it had fewer than 100 views.
As Yoav took to the stage in military uniform, the audience was instructed not to photograph him or record his presentation. “It’s good because it’s a bad hair day for me,” he joked.
In the 10-minute presentation – titled “digital transformation and artificial intelligence in the intelligence domain” – the colonel offered a rare insight into how opaque AI systems are being used by secretive military and intelligence bodies.
When using AI to predict whether someone is a terrorist, he explained, Unit 8200 takes information it has about people it believes are members of terrorist groups and aims “to find the rest of the group”.
Referring to a specific example, the official said that in the IDF’s May 2021 military operation in Gaza, his department applied this principle to “find Hamas squad missile commanders and anti-tank missile terrorists in Gaza in order to operate against them”.
He explained that using a form of machine learning – known as “positive unlabelled learning” – “we take the original sub-group, we calculate their close circles, we then calculate relevant features, and at last we rank the results and determine the threshold.”
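The steps Yoav lists (seed group, close circles, features, ranking, threshold) match the general shape of positive-unlabelled ranking over a social graph. The sketch below is purely illustrative, assuming a networkx graph and invented features; nothing public describes the actual Unit 8200 system.

```python
# Purely illustrative sketch of the pipeline described above: seed group,
# close circles, features, ranking, threshold. Every structural choice here
# is assumed; it does not reflect any real military system.
import networkx as nx
from sklearn.linear_model import LogisticRegression

def rank_candidates(graph: nx.Graph, known: set, threshold: float = 0.8):
    # "Calculate their close circles": candidates are neighbours of known members.
    candidates = {n for k in known for n in graph.neighbors(k)} - known

    # "Calculate relevant features": simple, invented graph features per node.
    def features(node):
        ties_to_known = sum(1 for n in graph.neighbors(node) if n in known)
        return [ties_to_known, graph.degree(node)]

    # Naive positive-unlabelled setup: known members are positives and all
    # other nodes are treated as (noisy) negatives. Real PU learning methods
    # correct for positives hidden in the unlabelled set.
    unlabelled = [n for n in graph.nodes if n not in known]
    X = [features(n) for n in known] + [features(n) for n in unlabelled]
    y = [1] * len(known) + [0] * len(unlabelled)
    model = LogisticRegression().fit(X, y)

    # "Rank the results and determine the threshold."
    scored = ((n, model.predict_proba([features(n)])[0][1]) for n in candidates)
    return sorted((s for s in scored if s[1] >= threshold), key=lambda s: -s[1])
```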
The colonel said intelligence officers’ feedback is used “to enrich and improve our algorithm” and stressed that “people of flesh and blood” make decisions. “Ethically speaking we put a lot of emphasis on this,” he said, adding that “these tools are meant to help break their barriers”.
According to Yoav, Unit 8200 was able to break “the human barrier” during the May 2021 offensive when it managed to produce more than 200 new targets. “There were times when this amount took us almost a year,” he said.
Contacted for comment about the video, the IDF said the colonel’s participation in the conference had been approved by the military. However, a spokesperson denied that his remarks conflict with the IDF’s recent denials about its use of AI. In a subtle change of wording not used in its original statement, the IDF told the Guardian its AI systems do not “choose targets” for attack.
“The IDF never denied the existence of a database of operatives in terrorist organisations, that cross-checks existing information on such operatives,” it said. “At the same time, the IDF fully stands by its statement that it does not use AI systems that choose targets for attack, and that the database in question is not a list of operatives eligible to attack. There is no contradiction.”
In their testimony, the six intelligence officers who spoke out last week said the Lavender tool had been used to help identify potential targets at an unprecedented scale and pace. None of them accused the IDF of using systems that automatically select targets for attack.
Four of the sources said that, at one stage early in the war, Lavender listed as many as 37,000 men in the enclave whom the AI system had linked to Hamas or Palestinian Islamic Jihad. None of the sources denied that humans were involved in authorising strikes, although some said the human oversight was minimal.
“I would invest 20 seconds for each target at this stage, and do dozens of them every day,” one intelligence officer said. “I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”
Their accounts were published by the Israeli-Palestinian publication +972 magazine, its Hebrew-language outlet Local Call and the Guardian.
Col Yoav’s description of the partnership between AI and intelligence personnel echoes a model for targeting operations envisioned by his commander, the Unit 8200 chief Yossi Sariel, whom the Guardian revealed had left his identity exposed online as the secret author of a 2021 book, The Human Machine Team.
At one stage, the colonel explained that the AI and data science department he runs at Unit 8200 is also known internally as “the human machine integration centre”.
Speaking eight months before the IDF commenced operations in Gaza after the Hamas-led 7 October attacks, the colonel was optimistic about the IDF’s move “from the postcard age to the digital era”, where “suddenly you can react during battle with applied data-science driven solutions”.
Looking ahead, he added: “I’m curious with respect to how will the next operation look like, digitally speaking”.
Source: theguardian.com
 
Sainsbury’s aims to be an ‘AI-enabled grocer’ with Microsoft AI technology

Sainsbury’s, a prominent UK supermarket chain, is set to leverage Microsoft’s artificial intelligence and machine learning tools to elevate its store operations and provide customers with a more engaging and convenient shopping experience.
As part of its ‘Next Level Sainsbury’s’ strategy, the supermarket will integrate generative AI, powered by Microsoft Azure, to enhance its online shopping platform and optimize customers’ search experience. By harnessing AI capabilities, Sainsbury’s aims to offer a more interactive and personalized online shopping journey for its millions of customers across the UK.
In addition to enhancing the online shopping experience, Sainsbury’s plans to equip its store colleagues with real-time data and insights to streamline in-store processes such as shelf replenishment. Leveraging multiple data inputs, including shelf-edge cameras, AI technology will guide colleagues on prioritizing restocking activities, thereby improving efficiency and productivity.
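As a rough illustration of what camera-driven restocking priority could look like (Sainsbury’s has not published its logic, so the fields and weighting below are invented), emptier shelves holding faster-selling products would be queued first:

```python
# Hypothetical sketch of restocking prioritisation from shelf-edge camera
# readings. The data model and scoring weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class ShelfReading:
    shelf_id: str
    stock_fraction: float   # fill level estimated by the shelf-edge camera, 0.0 to 1.0
    sales_rate: float       # units sold per hour for the product on this shelf

def restock_priority(readings: list[ShelfReading]) -> list[ShelfReading]:
    """Emptier shelves with faster-selling products come first."""
    return sorted(readings,
                  key=lambda r: (1.0 - r.stock_fraction) * r.sales_rate,
                  reverse=True)

queue = restock_priority([
    ShelfReading("bread-03", stock_fraction=0.2, sales_rate=40),
    ShelfReading("soup-11", stock_fraction=0.5, sales_rate=5),
])
print([r.shelf_id for r in queue])  # bread-03 is restocked first
```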
Over the next five years, Sainsbury’s will deploy Microsoft Azure to implement these initiatives, integrating data assets with Microsoft 365 collaboration tools to drive innovation and operational excellence.
Clodagh Moriarty, Chief Retail and Technology Officer at Sainsbury’s, expressed confidence in the collaboration with Microsoft, emphasizing its role in accelerating the supermarket’s ambition to become the UK’s leading AI-enabled grocer. Moriarty highlighted the strategic investment in transformative capabilities, aimed at enhancing efficiency, productivity, and customer service while delivering value to shareholders.
Clare Barclay, CEO of Microsoft UK, commended Sainsbury’s visionary approach, noting its commitment to placing AI at the forefront of its business strategy. Barclay expressed enthusiasm for the collaboration, emphasizing its potential to revolutionize the retail experience for both customers and store colleagues.
The partnership between Sainsbury’s and Microsoft marks a significant step towards ushering in the next generation of retail, powered by AI-driven solutions.
Source: technologyrecord.com

Researchers build AI-driven sarcasm detector

Artificial intelligence has made remarkable strides, from passing bar exams to reading bedtime stories with emotion. Yet, despite these feats, it still falls short of matching the intricate nuances of human communication—particularly, the art of sarcasm.
However, researchers in the Netherlands are determined to change that narrative. They have developed an AI-driven sarcasm detector that can discern when sarcasm is being used, a feat previously thought to be exclusive to human cognition.
Matt Coler, from the University of Groningen’s speech technology lab, expresses excitement about the project’s progress. He emphasizes the importance of understanding sarcasm, a pervasive aspect of human discourse, to facilitate seamless communication between humans and machines.
Recognizing sarcasm poses challenges due to its subtlety, especially in text-based interactions where cues like tone and facial expressions are absent. To overcome this, researchers trained their AI using a combination of text, audio, and emotional content from popular sitcoms like Friends and The Big Bang Theory.
The AI, trained on annotated data from these shows, demonstrated an impressive ability to detect sarcasm in unlabelled exchanges from the sitcoms, achieving an accuracy rate of nearly 75%. Further enhancements are underway, including incorporating visual cues like eyebrow movements and smirks, to improve accuracy even more.
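The published system is multimodal, combining text, audio and emotion annotations; as a much simpler, text-only illustration of the same task, a classifier can be trained on utterances labelled sarcastic or literal. Everything below, including the data, is invented for the example:

```python
# Toy, text-only sketch of sarcasm classification on labelled utterances.
# The researchers' actual model is multimodal; this example is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

utterances = [
    "Oh great, another Monday.", "The meeting starts at nine.",
    "Wow, what a genius idea.", "Please pass the salt.",
    "Sure, because that worked so well last time.", "The report is on your desk.",
    "I just love being stuck in traffic.", "Dinner will be ready at six.",
]
labels = [1, 0, 1, 0, 1, 0, 1, 0]  # 1 = sarcastic, 0 = literal

# Hold out a stratified test split, train a TF-IDF + logistic regression
# pipeline, and report held-out accuracy.
X_tr, X_te, y_tr, y_te = train_test_split(
    utterances, labels, test_size=0.25, stratify=labels, random_state=0)
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(X_tr, y_tr)
print("held-out accuracy:", accuracy_score(y_te, model.predict(X_te)))
```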
Beyond enhancing interactions with AI assistants, this technology holds potential for detecting negative language and identifying instances of abuse or hate speech. However, as AI becomes more adept at understanding sarcasm, questions arise about its potential to wield sarcasm itself.
Coler muses about the implications of machines responding with sarcasm, raising concerns about clarity in communication. Nonetheless, advancements in AI-driven sarcasm detection offer promising prospects for improving human-machine interactions and bridging the gap between artificial and human intelligence.
Source: theguardian.com

AI, bias and experiments: how Women in News is tackling tech’s inbuilt stereotypes

Issues surrounding bias in AI are deeply rooted in the accuracy, trustworthiness, and quality of data, which, if overlooked, can significantly skew outcomes. Lyndsey Jones, an AI author and transformation coach, delves into these concerns, offering valuable insights for newsrooms on monitoring and reviewing data.
Madhumita Murgia, an AI journalist and the first artificial intelligence editor of the Financial Times, sheds light on how women, migrants, precarious workers, and minority groups are disproportionately affected by the technical limitations of Generative AI. Murgia emphasizes the lack of representation of these groups in the development process of AI technologies, highlighting the need for inclusive participation.
WAN-IFRA Women in News workshops on the Age of AI in the newsroom have brought the effects of bias to the forefront. Through the Digital ABCs training program, media professionals are equipped with skills to navigate the digital landscape and drive organizational change.
A newly launched module focuses on AI; more than 100 participants in eastern Europe have taken part, and it has now been extended to journalists in parts of Africa, the Middle East and Southeast Asia. Instances of bias surfaced during the training, such as AI tools generating offensive avatars and misinterpreting accents.
Google CEO Sundar Pichai’s acknowledgment of biased AI tools reflects ongoing concerns in the industry. Timnit Gebru’s dismissal from Google after she highlighted biases further underscores the need for vigilance in addressing these issues.
Diverse teams in WIN’s Age of AI program are experimenting with various tools like fact-checking and enhancing staff skill sets in AI usage. Projects under consideration for further EU funding include a video lab for content amplification and an AI avatar for journalist safety.
Media companies must ensure diverse staff collaboration when testing AI tools. Quotas for women in AI research and cross-border partnerships may be necessary for smaller media groups to compete effectively.
Journalists can take steps to improve content quality by examining storytelling practices and ensuring diversity in sources and representation. Consistent data collection across departments and assessing biases in data sets are crucial for ethical AI use in journalism; one concrete form such a check can take is sketched below. Ultimately, AI tools should be used to enhance journalism’s quality and integrity, rather than to generate clickbait or misinformation.
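As a minimal illustration (the data and fields are invented, not any WIN tool), a newsroom could audit how often different groups appear as quoted sources in its own archive:

```python
# Invented example of a simple source-diversity audit: count how often each
# group appears among quoted sources and report their share of the total.
from collections import Counter

sources = [
    {"name": "A", "gender": "female"},
    {"name": "B", "gender": "male"},
    {"name": "C", "gender": "male"},
    {"name": "D", "gender": "male"},
]

counts = Counter(s["gender"] for s in sources)
total = sum(counts.values())
for group, n in counts.items():
    print(f"{group}: {n}/{total} ({n / total:.0%})")
```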
Source: wan-ifra.org
