Google’s Privacy Sandbox Faces Scrutiny Over User Tracking Allegations
Overview
Google’s Privacy Sandbox was designed to replace third-party cookies in Chrome with a more privacy-conscious alternative, but the Austrian privacy group Noyb is now criticizing it. The group claims that, under the guise of privacy enhancement, Sandbox allows Google to track users within the browser itself, amounting to first-party ad tracking.
Impact
Noyb’s main concern is that, while Sandbox may be less invasive than third-party cookies, it still does not fully eliminate user tracking and, in the group’s view, violates data protection laws. Noyb also criticizes Google’s consent mechanisms as neither fully transparent nor fair, arguing that they may be illegal under EU regulations.
Despite adjustments in response to regulatory feedback, Google continues to roll out Sandbox, with phased testing and a gradual deprecation of third-party cookies still planned.
Recommendation
Users should stay informed about Privacy Sandbox developments and be aware of the consent they are providing. Organizations relying on Chrome, especially for digital advertising and data analytics, should prepare for changes in cookie handling. Most importantly, regulatory bodies should monitor the implementation of these technologies to ensure genuine privacy enhancements.
Critical Security Vulnerabilities Uncovered in ZKTeco Biometric Systems
Overview
Kaspersky’s research reveals 24 critical vulnerabilities in ZKTeco’s biometric systems. The flaws span multiple classes, including SQL injection, buffer overflows, command injection, and arbitrary file operations. They could allow unauthorized access and data theft, and potentially let attackers deploy malicious software.
Impact
Attackers could bypass biometric verification using manipulated user data or counterfeit QR codes, potentially committing access violations. Biometric data is at risk of being stolen and sold on dark-web markets. Remote manipulation of devices can also lead to arbitrary code execution and system configuration alterations, all of which can enable the installation of backdoors.
Potential Risks
CVE-2023-3938: SQL injection via QR code scanning could let attackers authenticate without valid credentials.
CVE-2023-3939: Command injection flaws may allow the execution of OS commands with elevated privileges.
CVE-2023-3940 and CVE-2023-3941: Flaws allowing arbitrary file reads and writes could enable access to sensitive data.
CVE-2023-3942 and CVE-2023-3943: Additional SQL injections and buffer overflows could permit database manipulations.
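To illustrate the class of flaw behind CVE-2023-3938 and CVE-2023-3942, the sketch below shows how a crafted QR code payload can subvert a naive authentication query, and how a parameterized query neutralizes it. The table and column names are hypothetical and the code is illustrative only, not ZKTeco’s implementation:

```python
import sqlite3

def authenticate_insecure(conn, qr_payload):
    # VULNERABLE: the QR payload is concatenated directly into the SQL
    # statement, mirroring the injection class behind CVE-2023-3938.
    query = f"SELECT id FROM users WHERE badge = '{qr_payload}'"
    return conn.execute(query).fetchone() is not None

def authenticate_safe(conn, qr_payload):
    # FIXED: a parameterized query treats the payload as data, not SQL.
    query = "SELECT id FROM users WHERE badge = ?"
    return conn.execute(query, (qr_payload,)).fetchone() is not None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, badge TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'EMP-0001')")

malicious = "' OR '1'='1"                       # crafted QR code contents
print(authenticate_insecure(conn, malicious))   # True  -- attacker is let in
print(authenticate_safe(conn, malicious))       # False -- injection neutralized
```

The tautology `' OR '1'='1` turns the insecure query’s WHERE clause into an always-true condition, which is exactly how a scanned QR code could “authenticate” an attacker.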
Recommendation
Network Segmentation: Isolate biometric readers in separate network segments to limit breach impacts.
Strong Access Controls: Employ robust administrator passwords and hardened security configurations.
Reduced QR Code Use: Minimize reliance on QR codes for authentication.
Comprehensive Security Assessment: Conduct thorough security checks and biometric system audits.
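As a complement to reduced QR code use, any QR payload that is still accepted should be strictly validated before it reaches authentication logic. A minimal sketch, assuming a hypothetical EMP-NNNN badge format:

```python
import re

# Hypothetical badge format; real deployments would use their own scheme.
BADGE_RE = re.compile(r"^EMP-\d{4}$")

def is_valid_badge(qr_payload: str) -> bool:
    # Reject anything that is not exactly a well-formed badge ID before it
    # ever touches the database or authentication logic.
    return bool(BADGE_RE.fullmatch(qr_payload))

print(is_valid_badge("EMP-0001"))     # True
print(is_valid_badge("' OR '1'='1"))  # False -- injection payload rejected
```

Allow-list validation like this does not replace parameterized queries, but it shrinks the attack surface of every downstream component that handles the payload.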
North Korean Phishing Campaigns Target Brazilian Fintech Sector
Overview
Google’s Mandiant and Threat Analysis Group (TAG) highlight a surge in phishing attacks conducted by North Korean operatives targeting Brazil’s financial technology and cryptocurrency sectors. The campaigns have been active since 2020 and are carried out by multiple North Korean groups, notably UNC4899.
Impact
The primary targets are government agencies, aerospace, technology, and, above all, Brazil’s fintech and cryptocurrency firms. The attackers use social engineering to initiate contact through social media, presenting fraudulent job opportunities at well-known firms to lure targets.
The attacks then escalate to the distribution of trojanized apps, which can lead to full system control. Other campaigns involve masquerading as recruiters to distribute malware such as AGAMEMNON, a downloader for further exploits. Groups like PAEKTUSAN have also impersonated HR personnel to infiltrate aerospace firms, while PRONTO has targeted diplomats with decoy emails.
Recommendation
Be vigilant with unsolicited communications, especially job offers. Implement stringent security measures such as regular system audits, up-to-date antivirus and anti-phishing tools, and comprehensive employee training. Isolate critical networks from general network access. Most importantly, ensure compliance with international cybersecurity regulations.
Microsoft Postpones Launch of AI-Powered Recall Feature Over Security Concerns
Overview
Microsoft has announced a delay in releasing the AI-powered “Recall” feature for its Copilot+ PCs, initially slated for June 18, 2024. The decision follows intense scrutiny and backlash over security risks associated with the feature. Recall can capture every action on a user’s PC and create a searchable database powered by Microsoft’s on-device AI.
Impact
Privacy Concerns: Critics argue that Recall’s ability to store screenshots presents privacy issues.
Security Risks: Its comprehensive data collection makes it an attractive target for cybercriminals.
Public Backlash: Microsoft is being criticized for its initial secretive approach to developing the feature, which did not include testing in the Windows Insider Program.
Regulatory Attention: The feature’s potential implications have caught the attention of regulators.
Next Up
Microsoft plans to transition the rollout of Recall to the Windows Insider Program to harness community expertise. In response to the backlash, Recall will become an opt-in feature, and users must authenticate through Windows Hello to access the indexed data.
“Sleepy Pickle”: A New Threat to Machine Learning Model Integrity
Overview
Sleepy Pickle is a novel attack technique that targets the integrity of ML models through the widely used Pickle serialization format. It exploits the deserialization process to execute arbitrary code that tampers with a model’s output. Security firm Trail of Bits, which developed and disclosed the technique, warns that it poses significant risk to the security of ML supply chains.
Impact
Sleepy Pickle can modify ML models by injecting malicious payloads into Pickle files. The payloads can be delivered through various methods, e.g. adversary-in-the-middle attacks, phishing, supply chain compromise, or exploitation of system vulnerabilities. Attackers can then manipulate model outputs and generate harmful recommendations. Worse, the attack broadens the threat landscape because it does not require direct access to the target system: a tampered Pickle file delivered through any of these channels is enough.
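The root cause is that Pickle is effectively a small program, not a passive data format: deserializing a file can invoke arbitrary callables. The minimal sketch below uses a benign payload that silently flips a stand-in model weight during pickle.loads, mirroring how a Sleepy Pickle payload corrupts a model without any explicit action by the victim (the weight dictionary and patch function are illustrative, not taken from the Trail of Bits research):

```python
import pickle

def patch_model(weights):
    # Stand-in for a malicious payload: silently flip the sign of a weight
    # so the "model" misbehaves without crashing or looking tampered with.
    weights["w"] = -weights["w"]
    return weights

class TaintedModel(dict):
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild the object; an attacker can
        # make it return any callable, which runs during deserialization.
        return (patch_model, ({"w": 0.5},))

blob = pickle.dumps(TaintedModel())

# The victim only calls pickle.loads -- yet the attacker's code runs here.
model = pickle.loads(blob)
print(model)  # → {'w': -0.5}
```

A real payload would be hidden inside a legitimate model file and would rewrite actual weights or post-process outputs, but the mechanism is the same: loading the file is enough to execute the attacker’s code.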
Recommendation
Only load ML models from trusted sources. Prefer frameworks such as TensorFlow or JAX, whose file formats support safer deserialization methods.
Additionally, implement cryptographic signing to verify the integrity of model files before loading them, and educate stakeholders about the risks of loading serialized models. Conduct frequent security audits of ML models and their supply chains to mitigate vulnerabilities.
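Where Pickle files cannot be avoided entirely, one documented mitigation (from the Python pickle documentation, not specific to the Trail of Bits research) is a restricted unpickler that allow-lists the only globals it will resolve:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allow-list approach: refuse to resolve any global except the few
    # (module, name) pairs explicitly permitted here.
    ALLOWED = {("builtins", "dict"), ("builtins", "list")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"global {module}.{name} is forbidden")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A plain data payload loads fine...
print(safe_loads(pickle.dumps({"w": 0.5})))  # → {'w': 0.5}

# ...but a pickle whose payload tries to resolve a callable is rejected.
import os

class Evil:
    def __reduce__(self):
        return (os.system, ("echo pwned",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

This blocks the payload-delivery mechanism that Sleepy Pickle relies on, but it is defense in depth, not a substitute for avoiding untrusted Pickle files altogether.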