Malicious PowerShell script pushing malware looks AI-written

A threat actor is using a PowerShell script that was likely created with the help of an artificial intelligence system such as OpenAI’s ChatGPT, Google’s Gemini, or Microsoft’s Copilot.
The adversary used the script in a March email campaign that targeted dozens of organizations in Germany to deliver the Rhadamanthys information stealer.
AI-based PowerShell deploys infostealer
Researchers at cybersecurity company Proofpoint attributed the attack to a threat actor tracked as TA547, believed to be an initial access broker (IAB).
TA547, also known as Scully Spider, has been active since at least 2017 delivering a variety of malware for Windows (ZLoader/Terdot, Gootkit, Ursnif, Corebot, Panda Banker, Atmos) and Android (Mazar Bot, Red Alert) systems.
Recently, the threat actor started using Rhadamanthys, a modular stealer that is constantly expanding its data collection capabilities (clipboard contents, browser data, cookies).
Proofpoint has been tracking TA547 since 2017 and said that this campaign was the first one where the threat actor was observed using Rhadamanthys malware.
The info stealer has been distributed since September 2022 to multiple cybercrime groups under the malware-as-a-service (MaaS) model.
According to Proofpoint researchers, TA547 impersonated the German cash-and-carry brand Metro in a recent email campaign, using invoices as a lure for “dozens of organizations across various industries in Germany.”
The messages included a ZIP archive protected with the password ‘MAR26’, which contained a malicious shortcut file (.LNK). Accessing the shortcut file triggered PowerShell to run a remote script.
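For defenders, this delivery chain leaves a characteristic parent/child process pair: powershell.exe spawned by explorer.exe when a user double-clicks the shortcut. The following is a minimal hunting sketch, assuming Sysmon is installed with process-creation logging enabled; the log name and event ID are standard Sysmon values, not indicators published by Proofpoint:

```powershell
# Search Sysmon process-creation events (ID 1) for PowerShell launched by Explorer,
# the typical parent/child pair produced when a user opens a malicious .LNK file
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-Sysmon/Operational'
    Id      = 1
} -ErrorAction SilentlyContinue |
    Where-Object {
        $_.Message -match 'ParentImage: .*\\explorer\.exe' -and
        $_.Message -match 'Image: .*\\powershell\.exe'
    } |
    Select-Object -First 20 TimeCreated
```

On its own this is a noisy heuristic (users legitimately launch PowerShell from Explorer), so in practice it would be combined with command-line context such as download-and-execute flags.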
“This PowerShell script decoded the Base64-encoded Rhadamanthys executable file stored in a variable and loaded it as an assembly into memory and then executed the entry point of the assembly” – Proofpoint
The researchers explain that this method allowed the malicious code to be executed in memory without touching the disk.
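Fileless execution is also why script visibility matters more than file scanning here: with script block logging enabled, Windows records the decoded content of executed script blocks even though no file reaches disk. Below is a minimal defensive sketch, assuming an elevated PowerShell session on Windows; the registry path and event ID 4104 are standard Windows logging details, not specifics from this campaign:

```powershell
# Turn on PowerShell script block logging via the policy registry key
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name 'EnableScriptBlockLogging' -Value 1 -Type DWord

# Search logged script blocks (event ID 4104) for in-memory loading patterns
Get-WinEvent -FilterHashtable @{
    LogName = 'Microsoft-Windows-PowerShell/Operational'
    Id      = 4104
} -ErrorAction SilentlyContinue |
    Where-Object { $_.Message -match 'FromBase64String|::Load\(' } |
    Select-Object -First 10 TimeCreated, Message
```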
Analyzing the PowerShell script that loaded Rhadamanthys, the researchers noticed that each component was preceded by a hash sign (#) and a specific descriptive comment, a pattern uncommon in human-written code.
 

Suspected AI-generated PowerShell script used in TA547 attack (source: Proofpoint)
The researchers note that these characteristics are typical of code originating from generative AI solutions like ChatGPT, Gemini, or Copilot.
While they cannot be absolutely certain that the PowerShell code came from a large language model (LLM), the researchers say the script’s content suggests the possibility that TA547 used generative AI to write or rewrite it.
Daniel Blackford, director of Threat Research at Proofpoint, clarified for BleepingComputer that while developers are great at writing code, their comments are typically cryptic, or at least unclear, and often contain grammatical errors.
“The PowerShell script suspected of being LLM-generated is meticulously commented with impeccable grammar. Nearly every line of code has some associated comment,” Blackford told BleepingComputer.
Additionally, based on the output of their own experiments with LLM-generated code, the researchers have medium-to-high confidence that the script TA547 used in the email campaign was produced with this type of technology.
BleepingComputer used ChatGPT-4 to create a similar PowerShell script, and the output code resembled the one Proofpoint observed, down to the variable names and comments, further indicating that AI was likely used to generate the script.

Sample PowerShell script generated with ChatGPT (source: BleepingComputer)
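The commenting style is easy to illustrate with a benign, hypothetical stand-in: a harmless Base64 decode rather than a payload loader, written the way the suspect script reads, with a grammatically polished comment above nearly every line:

```powershell
# Define the Base64-encoded input string that will be decoded
$encodedText = 'SGVsbG8sIHdvcmxkIQ=='

# Convert the Base64 string into an array of raw bytes
$decodedBytes = [System.Convert]::FromBase64String($encodedText)

# Convert the byte array into a readable UTF-8 string
$decodedText = [System.Text.Encoding]::UTF8.GetString($decodedBytes)

# Print the decoded result to the console
Write-Output $decodedText
```

As Blackford notes, human-written scripts rarely sustain that comment density or polish.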

Another theory is that TA547 copied the script from a source that relied on generative AI for coding.
AI for malicious activities
Since OpenAI released ChatGPT in late 2022, financially motivated threat actors have been leveraging AI to create customized or localized phishing emails, run network scans to identify vulnerabilities on hosts or networks, and build highly convincing phishing pages.
Some nation-state actors associated with China, Iran, and Russia have also turned to generative AI to improve productivity when researching targets, cybersecurity tools, and methods to establish persistence and evade detection, as well as scripting support.
In mid-February, OpenAI announced that it blocked accounts associated with the state-sponsored hacker groups Charcoal Typhoon and Salmon Typhoon (China), Crimson Sandstorm (Iran), Emerald Sleet (North Korea), and Forest Blizzard (Russia) for abusing ChatGPT for malicious purposes.
As most large language models attempt to restrict output that could be used for malware or other malicious activity, threat actors have launched their own AI chat platforms tailored to cybercriminals.
Source: bleepingcomputer.com

Sainsbury’s aims to be an ‘AI-enabled grocer’ with Microsoft AI technology

Sainsbury’s, a prominent UK supermarket chain, is set to leverage Microsoft’s artificial intelligence and machine learning tools to elevate its store operations and provide customers with a more engaging and convenient shopping experience.
As part of its ‘Next Level Sainsbury’s’ strategy, the supermarket will integrate generative AI, powered by Microsoft Azure, to enhance its online shopping platform and optimize the customer search experience. By harnessing AI capabilities, Sainsbury’s aims to offer a more interactive and personalized online shopping journey for its millions of customers across the UK.
In addition to enhancing the online shopping experience, Sainsbury’s plans to equip its store colleagues with real-time data and insights to streamline in-store processes such as shelf replenishment. Leveraging multiple data inputs, including shelf-edge cameras, AI technology will guide colleagues on prioritizing restocking activities, thereby improving efficiency and productivity.
Over the next five years, Sainsbury’s will deploy Microsoft Azure to implement these initiatives, integrating data assets with Microsoft 365 collaboration tools to drive innovation and operational excellence.
Clodagh Moriarty, Chief Retail and Technology Officer at Sainsbury’s, expressed confidence in the collaboration with Microsoft, emphasizing its role in accelerating the supermarket’s ambition to become the UK’s leading AI-enabled grocer. Moriarty highlighted the strategic investment in transformative capabilities, aimed at enhancing efficiency, productivity, and customer service while delivering value to shareholders.
Clare Barclay, CEO of Microsoft UK, commended Sainsbury’s visionary approach, noting its commitment to placing AI at the forefront of its business strategy. Barclay expressed enthusiasm for the collaboration, emphasizing its potential to revolutionize the retail experience for both customers and store colleagues.
The partnership between Sainsbury’s and Microsoft signifies a significant step towards ushering in the next generation of retail, powered by innovative AI-driven solutions.
Source: technologyrecord.com

Researchers build AI-driven sarcasm detector

Artificial intelligence has made remarkable strides, from passing bar exams to reading bedtime stories with emotion. Yet, despite these feats, it still falls short of matching the intricate nuances of human communication—particularly, the art of sarcasm.
However, researchers in the Netherlands are determined to change that narrative. They have developed an AI-driven sarcasm detector that can discern when sarcasm is being used, a feat previously thought to be exclusive to human cognition.
Matt Coler, from the University of Groningen’s speech technology lab, expresses excitement about the project’s progress. He emphasizes the importance of understanding sarcasm, a pervasive aspect of human discourse, to facilitate seamless communication between humans and machines.
Recognizing sarcasm poses challenges due to its subtlety, especially in text-based interactions where cues like tone and facial expressions are absent. To overcome this, researchers trained their AI using a combination of text, audio, and emotional content from popular sitcoms like Friends and The Big Bang Theory.
The AI, trained on annotated data from these shows, demonstrated an impressive ability to detect sarcasm in unlabelled exchanges from the sitcoms, achieving an accuracy rate of nearly 75%. Further enhancements are underway, including incorporating visual cues like eyebrow movements and smirks, to improve accuracy even more.
Beyond enhancing interactions with AI assistants, this technology holds potential for detecting negative language and identifying instances of abuse or hate speech. However, as AI becomes more adept at understanding sarcasm, questions arise about its potential to wield sarcasm itself.
Coler muses about the implications of machines responding with sarcasm, raising concerns about clarity in communication. Nonetheless, advancements in AI-driven sarcasm detection offer promising prospects for improving human-machine interactions and bridging the gap between artificial and human intelligence.
Source: theguardian.com


AI, bias and experiments: how Women in News is tackling tech’s inbuilt stereotypes

Issues surrounding bias in AI are deeply rooted in the accuracy, trustworthiness, and quality of data, which, if overlooked, can significantly skew outcomes. Lyndsey Jones, an AI author and transformation coach, delves into these concerns, offering valuable insights for newsrooms on monitoring and reviewing data.
Madhumita Murgia, an AI journalist and the first artificial intelligence editor of the Financial Times, sheds light on how women, migrants, precarious workers, and minority groups are disproportionately affected by the technical limitations of generative AI. Murgia emphasizes the lack of representation of these groups in the development process of AI technologies, highlighting the need for inclusive participation.
WAN-IFRA Women in News workshops on the Age of AI in the newsroom have brought the effects of bias to the forefront. Through the Digital ABCs training program, media professionals are equipped with the skills to navigate the digital landscape and drive organizational change.
A newly launched module focuses on AI, with over 100 participants in eastern Europe taking part; it has since been extended to journalists in parts of Africa, the Middle East, and Southeast Asia. Instances of bias surfaced during the training, such as AI tools generating offensive avatars and misinterpreting accents.
Google CEO Sundar Pichai’s acknowledgment of biased AI tools reflects ongoing concerns in the industry. Timnit Gebru’s dismissal from Google after she highlighted biases further underscores the need for vigilance in addressing these issues.
Diverse teams in WIN’s Age of AI program are experimenting with tools for tasks like fact-checking and are enhancing staff skill sets in AI usage. Projects under consideration for further EU funding include a video lab for content amplification and an AI avatar for journalist safety.
Media companies must ensure diverse staff collaboration when testing AI tools. Quotas for women in AI research and cross-border partnerships may be necessary for smaller media groups to compete effectively.
Journalists can take steps to improve content quality by examining storytelling practices and ensuring diversity in sources and representation. Consistency of data collection across departments and assessing biases in data sets are crucial for ethical AI usage in journalism. Ultimately, AI tools should be used to enhance journalism’s quality and integrity, rather than generating clickbait or misinformation.
Source: wan-ifra.org

