
Appier powers Be Group’s growth in Vietnam’s competitive multi-service app industry

Appier has joined forces with Be Group, a prominent player in Vietnam’s multi-service application sector, to help drive the Vietnamese brand’s business growth and user engagement. Be Group, known for its Be app that offers services like ride-hailing and food and grocery deliveries, marked significant milestones in boosting market share and user loyalty through this innovative partnership.
Established in 2018, Be Group has rapidly emerged as a key player in Vietnam’s multi-service application industry, holding a substantial 25% market share. The company aimed to capitalize on the rapid growth of iPhone users in Vietnam to maximize app revenue. Two key challenges emerged:

Expanding the user base: Be Group sought to acquire new app users, especially those with high intent to make their first ride booking. The rapid growth of iOS users expanded the potential user base but made precise audience targeting more challenging.
Fostering and retaining user loyalty: Re-engaging dormant users, particularly those who had not placed an order in more than 30 days, was vital for building user habits and loyalty.

Appier, with its expertise in AI, employed a two-pronged approach to address Be Group’s challenges:

New User Acquisition: Appier harnessed first-party data from the Be app using deep learning and machine learning, enabling precise audience segmentation and the acquisition of users with a strong intent to book their first ride. The strategy combined Apple Search Ads, run through Appier’s AIXPERT AI-Assistant automation tool, with programmatic advertising.
Nurturing User Loyalty: AI analyzed first-party mobile data from the Be app to identify dormant users and predict their likelihood of making another booking (see the sketch below). Predictive analytics also helped Appier anticipate user acquisition costs and return on ad spend (ROAS).
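To make the dormant-user scoring concrete, here is a minimal sketch of that kind of reactivation model: a classifier trained on user activity features that ranks dormant users by predicted likelihood of booking again. Everything here is an assumption for illustration; the synthetic data, feature names, and use of scikit-learn’s logistic regression are hypothetical, not Appier’s actual pipeline.

```python
# Illustrative only: dormant-user reactivation scoring on synthetic data.
# Feature names and model choice are hypothetical, not Appier's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Synthetic first-party features: days since last order (dormant: > 30),
# lifetime orders, and app sessions in the last week.
X = np.column_stack([
    rng.integers(31, 180, n),   # days_since_last_order
    rng.poisson(8, n),          # lifetime_orders
    rng.poisson(2, n),          # sessions_last_7d
]).astype(float)

# Synthetic label: did the user book again within 30 days?
logits = -0.02 * X[:, 0] + 0.1 * X[:, 1] + 0.4 * X[:, 2] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score dormant users and target the top decile with re-engagement ads.
scores = model.predict_proba(X_test)[:, 1]
top_decile = scores >= np.quantile(scores, 0.9)
print(f"Targeting {top_decile.sum()} of {len(scores)} dormant users")
```

In a real campaign the scores would also feed cost and ROAS forecasts, since the expected reactivation rate of a targeted segment bounds what that segment is worth bidding on.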

Be Group achieved remarkable results through this partnership. Notably, Appier’s AI-driven solutions helped Be Group reactivate millions of dormant users, re-engaging them to make repeat bookings.
“We are thrilled with the outcomes of our partnership with Appier. Their exceptional AI-powered solutions have driven growth, engaged our users, and bolstered our market position. Appier’s innovative approach to user acquisition and re-engagement has proven invaluable in achieving these impressive results, highlighting the strength of our collaboration as we work together to shape the future of our industry,” said a spokesperson from Be Group.
This collaboration centered on leveraging first-party data, deep learning, and extensive insights into user behavior to achieve the campaign’s ultimate goal: acquiring new users likely to book and bringing back dormant ones. This holistic approach delivered a sixfold increase in Apple Search channel volume.
Both companies are excited about the possibilities this partnership has opened up. The results are reflected in the numbers, user habits, and loyalty Be Group has successfully instilled among its customers.
With the industry expected to continue evolving, the partnership between Appier and Be Group stands as a testament to the power of AI and data-driven insights in driving business growth and creating loyal customer bases. Both companies look forward to future endeavors as they aim to remain at the forefront of Vietnam’s thriving multi-service application industry.

Microsoft and Google’s new AI sales pitches: We’re your last line of defense against your scatterbrained self


Can You Trust On-Device AI with Your Privacy?
Google and Microsoft have recently unveiled their visions for artificial intelligence, with a particular focus on on-device AI, also known as offline or cloud-free AI. This concept, while intriguing, raises important questions about privacy and security.
Both companies are essentially asking: Would you be comfortable with an AI tool installed on your device, operating solely offline, providing real-time assistance akin to having a trusted human advisor by your side?
Microsoft’s recent introduction of “Copilot+” PCs featuring enhanced on-device processing, powered by Qualcomm’s Snapdragon X processor, highlights this trend. Among the innovations is a feature called “Recall,” designed to retrieve past digital interactions effortlessly. With Recall, users can access virtually anything they have ever seen on their PC, much like having a photographic memory. However, Recall’s comprehensive data collection raises concerns about privacy and potential misuse.
Google’s Android platform has also embraced on-device AI with its Gemini Nano AI layer. This technology promises faster performance while safeguarding user privacy. During a demonstration, Google showcased Gemini Nano’s ability to detect and flag potential scam calls in real-time, directly on the user’s device. However, the lack of clarity regarding data collection and retention policies has sparked skepticism.
While both Microsoft and Google tout the privacy benefits of on-device AI, critics warn of potential risks. The comprehensive data collected by these AI tools could become targets for exploitation by malicious actors or governments. Moreover, the lack of transparency around trigger mechanisms and default settings raises concerns about user consent and control over their data.
In conclusion, while on-device AI offers convenience and promises privacy benefits, it also introduces significant privacy and security risks of its own. As technology continues to advance, it is crucial for companies to prioritize transparency, user consent, and data security to build trust among users.
Source: fastcompany.com


Zornio: Colorado’s new artificial intelligence law is a game changer. Here’s why.


On Friday, Governor Jared Polis reluctantly signed Senate Bill 205 into law, marking a significant step in regulating high-risk artificial intelligence (AI) to combat discrimination, a first-of-its-kind initiative that promises to reshape industries with Colorado leading the charge.
For consumers, this development heralds positive change. While AI offers vast potential to enhance decision-making processes, it also carries the risk of producing discriminatory outcomes when implemented poorly. Colorado’s new law aims to mitigate these risks and minimize the potential harm caused by biased algorithms.
Instances of AI discrimination span various sectors, including healthcare and hiring practices. Studies have highlighted gender and racial biases in healthcare algorithms, resulting in unequal treatment for marginalized groups. Similarly, AI-driven hiring processes have perpetuated gender disparities and excluded candidates with disabilities.
Legal contexts have also witnessed cases of algorithmic bias, such as faulty facial recognition leading to false accusations and discriminatory mortgage approvals. Such systemic discrimination underscores the urgent need for oversight in AI deployment, a key objective of Colorado’s legislation.
While AI holds promise, its efficacy hinges on accurate data inputs and unbiased design. Biases present in real-world datasets can perpetuate discriminatory outcomes when integrated into AI systems. Thus, robust oversight mechanisms are essential to address inadvertent biases and ensure fairness in AI applications.
Consider a scenario where a Fortune 500 company employs AI for applicant screening. If the algorithm is trained on historical data reflecting a bias toward older, wealthier white men in CEO appointments, it may inadvertently perpetuate that bias in candidate selection. Colorado’s legislation seeks to address such inadvertent biases by providing oversight, irrespective of intent.
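To see the mechanism, here is a minimal sketch on synthetic data: a screening model trained on historically biased hiring decisions scores two equally qualified candidates differently purely because of group membership. The data, features, and scikit-learn model are hypothetical assumptions for illustration.

```python
# Illustrative only: historical bias leaking into a screening model.
# All data is synthetic; features and groups are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 10_000

qualification = rng.normal(0, 1, n)   # true, group-independent skill
group = rng.integers(0, 2, n)         # 0 = historically favored, 1 = not

# Historical hiring decisions favored group 0 regardless of qualification.
hired = (qualification + 1.5 * (group == 0) + rng.normal(0, 1, n)) > 1.0

model = LogisticRegression().fit(np.column_stack([qualification, group]), hired)

# Two candidates with identical qualification (0.0) receive different
# screening probabilities based only on group.
for g in (0, 1):
    p = model.predict_proba([[0.0, g]])[0, 1]
    print(f"group {g}: P(screened in) = {p:.2f}")
```

The model never needed a discriminatory intent; it simply learned the pattern present in the labels, which is exactly the kind of inadvertent bias the law targets.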
In essence, Colorado’s pioneering AI legislation represents a significant stride towards fostering fairness and equity in AI deployment. By adding guardrails to AI systems, the state sets a precedent for responsible governance, positioning itself at the forefront of ethical technological innovation.
Source: coloradosun.com


Gallagher Updates Regulation for Artificial Intelligence

Gallagher’s Cyber practice maintains a sharp focus on emerging technologies and the associated risks as organizations adopt them. In 2024, our attention is centered on evolving compliance requirements related to artificial intelligence (AI). Recent proposals for AI-specific regulations at the state, federal, and international levels are of particular interest. This summary serves as an update to our Q1 summary, “The Latest Regulation for Artificial Intelligence,” highlighting important developments.
State Regulation:
Currently, 17 states have introduced legislation aimed at regulating AI: California, Colorado, Connecticut, Delaware, Illinois, Indiana, Iowa, Louisiana, Maryland, Montana, New York, Oregon, Tennessee, Texas, Vermont, Virginia, and Washington.

Four states emphasize interdisciplinary collaboration: Illinois, New York, Texas, and Vermont.
Four states prioritize protection from unsafe or ineffective systems: California, Connecticut, Louisiana, and Vermont.
Eleven states focus on safeguarding against abusive data practices: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, and Virginia.
Three states — California, Illinois, Maryland — and New York City focus on transparency.
Three states concentrate on protection from discrimination: California, Colorado, and Illinois.
Twelve states emphasize accountability: California, Colorado, Connecticut, Delaware, Indiana, Iowa, Montana, Oregon, Tennessee, Texas, Virginia, and Washington.

Federal and Industry Sector Regulation:
On March 27, 2024, the US Department of the Treasury published a report titled “Managing Artificial Intelligence-Specific Cybersecurity Risks in the Financial Services Sector.” This report offers recommendations to financial institutions for utilizing AI technologies securely and effectively while mitigating operational risks, cybersecurity threats, and fraud challenges. Key recommendations include addressing capability gaps, improving regulatory coordination, and enhancing the National Institute of Standards and Technology (NIST) AI risk management framework.
On March 28, 2024, the US Office of Management and Budget issued a memorandum mandating government agencies to appoint chief AI officers (CAIOs). These officers will be responsible for promoting AI innovation, coordinating agency AI usage, managing associated risks, and expanding reporting on AI use cases.
Global Regulation:
On March 13, 2024, the European Parliament passed the Artificial Intelligence (AI) Act, establishing the first comprehensive legal framework for AI worldwide. The act aims to foster trustworthy AI by ensuring adherence to fundamental rights, safety, and ethical principles while addressing the risks associated with high-impact AI models.
Key points from the AI Act include the following.
Risk Classification:
The AI Act classifies AI systems based on risk:

Unacceptable risk: Certain AI systems (e.g., social scoring systems and manipulative AI) are prohibited.
High-risk AI systems: These systems are regulated and subject to extensive obligations. Providers (i.e., developers) of high-risk AI systems must comply with requirements related to transparency, safety, and accountability.
Limited risk AI systems: These systems — including chatbots and deepfakes — are subject to lighter transparency obligations, as long as users are aware the content is AI generated.
Minimal risk AI systems: Systems such as AI-enabled video games and spam filters remain unregulated.

Most obligations fall on providers of high-risk AI systems that intend to place those systems on the market in the EU or to use their output within the EU.
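As a simplified way to see the tiered structure, the sketch below encodes the four risk tiers as a lookup table. The example system types come from the list above; the mapping, and the choice to default unknown systems to the high-risk tier pending review, are illustrative assumptions, not the Act’s legal test.

```python
# Illustrative only: the AI Act's four risk tiers as a simple lookup.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "regulated; extensive obligations"
    LIMITED = "lighter transparency obligations"
    MINIMAL = "unregulated"

EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "manipulative AI": RiskTier.UNACCEPTABLE,
    "chatbot": RiskTier.LIMITED,
    "deepfake": RiskTier.LIMITED,
    "AI-enabled video game": RiskTier.MINIMAL,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system_type: str) -> str:
    # Conservative default: treat unknown systems as high risk until
    # a proper legal classification is made.
    return EXAMPLE_SYSTEMS.get(system_type, RiskTier.HIGH).value

print(obligations("chatbot"))          # lighter transparency obligations
print(obligations("hiring screener"))  # regulated; extensive obligations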
General-Purpose AI:

All general-purpose AI (GPAI) model providers are required to comply with the terms of the Directive on Copyright in the Digital Single Market (also called the Copyright Directive). They must also supply users with instructions for use and written technical documentation.
Any GPAI models that present a systemic risk must conduct model evaluations and adversarial testing, document and report serious incidents, and implement cybersecurity controls.

Prohibited AI Systems:
The AI Act prohibits certain types of AI systems:

Those deploying subliminal, manipulative, or deceptive techniques to distort behavior and impair informed decision-making, causing significant harm.
Those exploiting vulnerabilities related to age, disability, or socio-economic circumstances to distort behavior, causing significant harm.
Biometric categorization systems inferring sensitive attributes (e.g., race, political opinions, sexual orientation), except for specific lawful purposes.

Deployers of AI systems:
Deployers of high-risk AI systems have obligations of their own, though fewer than providers. These apply to deployers located in the EU and to third-country deployers where the AI system’s output is used in the EU.
Risk Management Strategies:
Organizations affected by these new AI compliance requirements should communicate them to key stakeholders and consider leveraging Cyber insurance policies that offer regulatory compliance guidance.
In summary, today’s regulation around AI cuts across multiple industry sectors and jurisdictions, including financial services, healthcare, technology, education, real estate, and municipalities, and will undoubtedly spread to others in short order. Any organization embracing generative AI tools should embed a formal risk management plan for AI usage into its overall enterprise risk management program. This will require a cross-divisional effort: risk managers should coordinate legal, compliance, human resources, operations, IT, marketing, and other functions while closely monitoring emerging risks as AI systems become more widely used.
Source: insurancejournal.com