VPNs Are Not Foolproof on Suspicious Wi-Fi
A team of researchers at Leviathan Security has outlined a method whereby an attacker with a device on a local network can capture all the internet traffic to and from a specific target device on the same network, even if that device is using a VPN. The attack requires that the attacker operate on the same physical local area network, such as shared or public Wi-Fi, but it does not necessarily require admin privileges. Dubbed TunnelVision, the method affects VPN software from every provider: by posing as the network’s DHCP server, the attacker can push routing rules that steer the victim’s traffic outside the VPN’s encrypted tunnel and through the attacker’s computer, all without alerting the user that their VPN has been compromised. Some mitigations can be implemented on Linux, but options for Mac and Windows devices are limited.
This attack requires that an attacker sharing access to a local network single out a specific computer to victimize. In other words, it can’t be deployed at scale to compromise the activity of everyone on the free Wi-Fi at Chicago O'Hare International Airport. But it could be used by a coffee shop or hotel employee to snoop on all the internet activity of a specific patron, even if that patron is using a VPN.
The Bottom Line: We have historically recommended that you avoid public Wi-Fi (even with a VPN) and instead use your iPhone’s hotspot. That advice is still good and will protect you here as well: you don’t need a VPN on your hotspot, and TunnelVision does not apply. We have also said that if you must use public Wi-Fi, you should use a VPN while doing so, and TunnelVision is one method whereby an attacker could snoop on your internet connection under those conditions. Even so, I tend to think our advice still holds: locks are still a good idea even though lockpicks exist. If you’re using a VPN on public Wi-Fi, an attacker would need to know about TunnelVision and also specifically target your device to bypass the VPN, so it is still better to have the VPN turned on than off. But it’s even better to avoid public Wi-Fi entirely; that way you’ll never need the VPN.
Apple, Google & Microsoft All in Hot Water over Generative AI Features
Generative AI only hit the mainstream in 2023, and the technology is so new that we haven’t yet established safe norms or patterns for its use. There is perhaps no finer illustration of this point than the big three tech companies all landing in controversy in the past month alone over specific proposed implementations of generative AI. In my opinion, the differences among their three ideas demonstrate an enormous gap in the cultures and values of the three companies. We don’t usually report on Android or Windows problems, but the contrast is remarkable.
Google’s Proposed New AI Feature for Android: How Would You Like Us to Listen In on All Your Calls (and Warn You If They Are Scams)?
Google is jumping on the AI bandwagon by incorporating a generative AI model called Gemini into the latest version of Android, reports Ars Technica. At Google I/O, the company’s annual developer conference, it demonstrated how Gemini will be able to listen in on your phone calls and alert you to potential scams. In the demo, the caller told the user that there were fraudulent charges on their account and that they needed to move the money to a new account; Gemini immediately popped up with a warning that the caller was likely trying to scam them. This sounds like a helpful feature in theory, and Google says the processing is done entirely on-device without contacting its servers, but even on-device operations could potentially be intercepted by malware, so listening in on every call seems like an egregious and unnecessary privacy risk from a company whose entire business model depends on harvesting users’ activity patterns.
The Bottom Line: If you’re already using an iPhone, this isn’t something you need to worry about, since it’s an Android feature. If I were an Android user, however, I would definitely be disabling Gemini.
Microsoft’s New AI Feature for Windows: How about We Take Pictures of Everything You Do on Your Device All the Time Forever
Microsoft is bringing a new AI-powered feature called Recall to Windows 11, available now on any Windows 11 device with Copilot+ PC features. Recall is designed to let you “retrace your steps” to content you previously interacted with on your PC. For example, you could use the Windows search bar to search for “brown bag,” and Recall would surface the web page you visited last week to look at leather purses, even though the words “brown” and “bag” do not appear anywhere on the page. The problem? As cybersecurity researcher Kevin Beaumont explains, Recall works by taking a snapshot of your screen every five seconds, having generative AI summarize the screenshot, and then saving both the shot and the summary to local storage on the device. That includes screenshots of sensitive content such as bank account information, passwords, Social Security numbers, and any other private information you encounter on your device, both on the web and off. Nothing sensitive is redacted or excluded from Recall screenshots, so storing them is no different from storing passwords in a plain text file. And while the screenshots are stored and analyzed on the device, they are still accessible to anyone using the computer, and likely to malware running on it as well. It is functionally equivalent to having spyware installed on every Windows 11 Copilot+ computer by the manufacturer.
The Bottom Line: If you have a Copilot+ PC running Windows 11, I’d recommend disabling Recall right away. You can do that by going into your computer’s Privacy & Security settings, selecting Recall & Snapshots, and turning off the toggle for Save Snapshots. While you’re at it, click Delete All to remove any snapshots already on your device. Personally, Rhett and I will both be staying on Windows 10 until at least October 14, 2025, when Microsoft has said it will stop providing security updates.
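For readers comfortable poking under the hood, here is a minimal Python sketch (Windows only) that checks whether the Group Policy registry value reported to turn off Recall’s snapshot saving is in place. The registry path and the value name (“DisableAIDataAnalysis”) are assumptions based on early policy documentation and could change before Recall ships more broadly; the Settings toggle described above remains the recommended approach.

    # Minimal sketch: check whether Recall's "Save snapshots" behavior is
    # disabled via Group Policy. Assumes the reported policy value
    # "DisableAIDataAnalysis" under Software\Policies\Microsoft\Windows\WindowsAI;
    # treat both the path and the value name as assumptions that may change.
    import winreg

    KEY_PATH = r"Software\Policies\Microsoft\Windows\WindowsAI"
    VALUE_NAME = "DisableAIDataAnalysis"

    def recall_disabled_by_policy(hive=winreg.HKEY_CURRENT_USER) -> bool:
        # The policy may also be set machine-wide under HKEY_LOCAL_MACHINE.
        try:
            with winreg.OpenKey(hive, KEY_PATH) as key:
                value, _ = winreg.QueryValueEx(key, VALUE_NAME)
                return value == 1  # 1 means snapshot saving is turned off by policy
        except FileNotFoundError:
            return False  # No policy set; Recall follows the Settings toggle

    if __name__ == "__main__":
        print("Recall snapshot saving disabled by policy:", recall_disabled_by_policy())

This only inspects (or would let you script) the policy setting; it does not delete snapshots already saved, so the Delete All step above is still worth doing.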
Apple’s Rumored New AI Feature for Safari: User-Controlled Ad Blocking
Okay, now it’s Apple’s turn. With the next operating system updates, due out in September, rumor has it that Apple will make blocking ads in Safari easier than ever. The next version of Safari will include a feature called “Web Eraser,” which will reportedly allow users to completely remove any part of a web page, including ads, reports AppleInsider. Any changes you make with Web Eraser persist across browsing sessions, so you only have to erase ads on a website once and they’ll stay erased when you reload the page. British news industry groups, including the News Media Association, have raised concerns about how the feature might affect the financial viability of journalism. Many news websites depend on ad revenue, and building ad blocking natively into Safari makes it quick and easy, which could cut into that revenue.
The Bottom Line: Malicious advertising is a scourge on the internet, and ad blocking can be an important part of staying private and secure. Safari’s new Web Eraser tool seems like a convenient way to block intrusive ads, but every site where you erase ads will keep missing out on the ad revenue your visits would otherwise generate. Subscription models might offset that loss, but they would also mean more paywalls. The debate is nuanced, and as a small publisher, iPhone Life is likely to be affected one way or another.
Compared with the proposals that landed Google and Microsoft in hot water, which commit egregious breaches of privacy in the name of providing a service nobody asked for, Apple’s idea actually addresses a problem users have, and it does so with a clear awareness of its users’ sensibilities and needs.
Pig Butchering Crypto-Scams Continue, Stealing Billions
Romance-based “pig butchering” crypto scams operating out of Myanmar, Laos, and Thailand now account for roughly 64 billion dollars in theft per year, estimates the United States Institute of Peace. The scams originate from massive prison-like call centers run on slave labor: the United Nations estimates that about 230,000 people are held in bondage, unable to leave, and forced to scam innocent victims across China, Europe, and the USA under threat of abuse or even death. We have covered the styles and mechanics of pig butchering scams in previous editions, but in brief, they may take the form of an online friendship or romance that can continue for months or even years. Eventually the scammer will mention that they have been making good money in cryptocurrency and offer to help you invest so you can make money too. Should you take their advice and invest, they will take your money and encourage you to invest more and more, which they will also take.
The Bottom Line: Protecting yourself and your loved ones from these prolific and horrific scams boils down to never acting on investment advice from any online contact, especially where cryptocurrency is involved. You can double-check that a new online acquaintance is a real person, not a scammer using a false identity, by insisting on a video call.