- Your Security Checklist
- Test Your Security Skills
- Your Weekly Security Update
- This Should Be on Your Radar
- Security Fail of the Week
- Security Updates from Apple
If you take nothing else from this newsletter, just do these three things to protect yourself:
- Set up a recovery key for your Apple Account. A recovery key is a unique 28-character code that can be used to recover your Apple Account if you ever lose access (e.g., if you forget your password).
- Use Sign In with Apple if you can. When you sign in using your Apple Account, you can choose to hide your email address and sign in using your iPhone's Face ID rather than a password.
- Learn to spot scam texts. It is common to receive texts about undeliverable USPS packages, unpaid tolls, or even texts from wrong numbers. These are all popular scam text tactics.
What should you do in the following scenario?
You've made a new contact online who seems friendly and helpful, and who has agreed to do a video call so you can verify each other's identities. Which of the following is NOT a sign of a deepfake live video to look out for while you're on FaceTime?
- Glitchy, low-resolution video
- Only their mouth moves while the rest of them stays still
- When they pass their hand in front of their face, their face shifts and glitches
- Audio synchronization issues: lips move but sound comes later
- They don't show their face at all
- Connection issues prevent the call from going through and you have to reschedule
- None of the above
Scroll to the bottom to see how you did!
Artificial intelligence has rapidly grown more sophisticated over the past few years, and the potential for misuse is frightening. Scammers and hackers already use AI to trick victims into sending money or handing over credentials, so the idea of using AI to build and deploy malware doesn't sound like much of a stretch. We don't have to wonder, thanks to researchers at Lass Security, who tested the most popular LLMs, including ChatGPT, Gemini, and Claude, to see if they could become the next generation of cybercriminals. The experiment found that none of the current generative AI models could successfully compile effective malware. For more details on the results of the experiment, check out the full story at WebProNews.
The Bottom Line: Thankfully, generative AI is far from self-aware and is incapable of acting on its own. It can't create and deploy malware, but it is still a dangerous tool for cybercriminals, who can use it, for example, to impersonate the voices of loved ones or write convincing phishing emails. Stay cautious of suspicious emails or phone calls from loved ones asking for money, and always make sure you're downloading files from reputable sources.
Video Call Glitches Are the New Uncanny Valley, with Real Consequences
We've often recommended that the best way to verify the identity of a new contact online is with a video call. Live video is hard to fake in part because human brains are highly practiced at detecting deception in the minute details of human facial features and emotions. Live video can be faked with advanced tools and real-time rendering, but so far, the results are less than perfect. Real-time fake video tends to be glitchy and low-resolution, with lip-synchronization issues; in short, it often looks like a bad connection. It's still a good idea to treat a glitchy internet connection as a potential red flag, but new research suggests you should be careful about judging someone based on one, because a negative bias may be hardwired into our brains. Researchers at the Cornell SC Johnson College of Business have been studying video calls, and they noticed that people who experience connection issues are perceived negatively, even when it's not their fault. A glitchy internet connection during a job interview made the candidate much less likely to get the job, and connectivity problems during an online court hearing seem to increase the likelihood of a bad outcome for that person. Read more at the researchers' blog.
The Bottom Line: If you are planning to verify someone's identity with a video call (an important security practice with new contacts online), make sure you have a good, stable connection. Cloudflare's free internet speed test offers a metric called jitter that measures your connection's stability over time. It's a good idea to make sure your connection is strong, fast, and stable before any important call, and to ask the other person to do the same. While a glitchy connection can be evidence of tampering, it's important to consider other factors and not judge too hastily.
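For the curious, jitter is easy to reason about: it captures how much your latency varies from moment to moment, rather than how high it is. Speed tests differ in their exact formulas (we're not describing Cloudflare's specific implementation here), but a common simplified definition is the average absolute difference between consecutive ping samples, sketched below:

```python
def jitter(latencies_ms):
    """Estimate connection jitter as the mean absolute difference
    between consecutive latency samples (in milliseconds)."""
    if len(latencies_ms) < 2:
        return 0.0
    diffs = [abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:])]
    return sum(diffs) / len(diffs)

# A steady connection produces low jitter; an unstable one, high jitter,
# even if the average latency looks similar.
steady = [20.0, 21.0, 20.5, 20.0]     # ping times in ms
unstable = [20.0, 90.0, 25.0, 140.0]
print(round(jitter(steady), 2))    # low: under 1 ms
print(round(jitter(unstable), 2))  # high: tens of ms
```

The takeaway: two connections can report the same download speed while one of them swings wildly in latency, and it's the swings, not the averages, that make live video stutter.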
Apple Offers Developers Age Restriction Tools in Wake of Australia's Social Media Ban
Australia has officially banned social media for anyone under the age of 16, meaning all platforms must remove underage users as soon as possible or risk facing hefty fines. Apple is helping make this transition a bit easier for developers with what it is calling the Declared Age Range Application Programming Interface (API). This API allows developers to set an age range for their apps, preventing underage users from accessing them. You can find out more about Apple's age restriction tools at MacRumors.
The Bottom Line: If you're an iOS developer, we hope Apple's tools will help make implementing age restrictions in your apps as painless as possible. If you're a social media user in Australia over the age of 16, nothing should change much for you.
Electronic Frontier Foundation Fights Back Against Age Verification Laws
Age verification is becoming more common, with the UK and several US states passing laws requiring websites to verify users' ages, and now Australia's social media ban. Age verification laws sound like a good idea on the surface, but putting them into practice usually means using privacy-invasive techniques to verify your age: uploading a copy of your driver's license, for example, or submitting to a facial scan. The Electronic Frontier Foundation has created a resource hub to help the public better understand age verification laws and how they can affect everyone.
The Bottom Line: Age verification laws are, unfortunately, inherently invasive. Entrusting a private company with a scan of your driver's license or your face means that if the company is ever breached, that data will be in the hands of cybercriminals. We recommend checking out the EFF resource hub linked above for more information.
Activist Arrested for Wiping His Phone
An Atlanta-based activist, Samuel Tunick, has been arrested and charged for erasing his phone. United States Customs and Border Protection (CBP) will commonly search the phones of people entering the country, regardless of citizenship, especially if those people are activists, journalists, or other high-profile individuals. In fact, CBP could soon require five years of social media history for anyone entering the country. Regardless of what CBP requires of those crossing the border, it is not a crime to wipe your cell phone. Read more details about the story at 404 Media.
The Bottom Line: CBP searching your phone is invasive and violates your privacy, but we would not recommend wiping your phone. Not only is it a tedious process, since you'll need to restore a backup later, but it also makes you look more suspicious to border officials and could result in retaliation, as we're seeing with Samuel Tunick. If you have sensitive data that you would not want CBP to see, we recommend not keeping it on your phone and instead uploading it to a secure cloud server.
Germany Believes Russia Is Behind 2024 Cyberattack
Last year, German air traffic control was targeted by a cyberattack that affected its office communications but did not disrupt flights. Now, Germany believes Russia was behind the attack. More specifically, Germany is attributing the attack to the hacker group Fancy Bear, which is believed to have ties to Russian military intelligence. Russia has denied responsibility for the cyberattack. You can check out the full story at the BBC.
Nonprofit Wants to Unlock Abandoned Devices
A nonprofit called Fulu is reducing e-waste by unlocking devices that have been discontinued by their manufacturers. For example, this past October, Google stopped pushing updates for its first- and second-generation Nest smart thermostats, rendering them obsolete. Fulu's goal is to unlock devices like these so they can continue to be used even when the manufacturer no longer supports them. The organization runs a bug bounty program that pays for security flaws that can be exploited to unlock devices. You can read more about Fulu at TechSpot.
The Bottom Line: If youāre still holding onto your old tech devices, you may soon have a solution to get them working again. We like to think of this as an ethical form of hacking.
700Credit Breached: Nearly 6 Million People Affected
The credit firm 700Credit is the latest target of a cyberattack. 700Credit is primarily used by auto dealerships to run credit checks on customers and handle vehicle financing. The company was breached this past October. Data exposed by the attack included names, Social Security numbers, birthdates, and addresses. Due to 700Credit's widespread use across the US, more than 5.8 million people have likely been affected. You can read more at Cyber Insider.
The Bottom Line: If you were affected by this data breach, you should receive a notification from 700Credit soon. As always, we recommend freezing your credit, even if you have not been impacted by this breach. Keeping your credit frozen prevents others from opening lines of credit in your name, and it can be unfrozen at any time should you need to open one yourself.
Consumer VPN Caught Snooping on AI Conversations for 8 Million Users
A VPN called Urban VPN, along with its associated suite of privacy tools, including an ad blocker, all marketed on the Chrome Web Store, has updated its code to steal AI conversations and sell them to third parties. The code was updated in July 2025 with a new function that monitors the user's browsing activity to detect when an AI service, such as ChatGPT or Claude, is open in a tab. The code then captures the messages sent and received in that tab and exfiltrates them for sale to third parties "for marketing analytics purposes." Urban VPN's collection of apps has around 8 million users. The exploitation was discovered by researchers at KOI. Read more at KOI's blog.
The Bottom Line: Web browser extensions require extraordinary access in order to provide any utility. This VPN extension reads every page looking for the kind of data it wants to steal, then invisibly modifies those pages to capture that data. It's an illustrative example of the power a browser extension has to compromise your information. We recommend avoiding browser extensions except for your password manager and ad blocker. As we mentioned last week, we also recommend choosing your VPN with great care, since you are placing great trust in it.
Texas Accuses 5 TV Brands of Spying: How to Disable
The Texas Attorney General has accused five TV brands of spying on Texans using a technology called Automatic Content Recognition (ACR). The five brands are Sony, Samsung, LG, Hisense, and TCL. ACR technology captures a still image of whatever is displayed on the screen every few seconds and uploads those images to company servers, where they are analyzed to help target advertising. ACR allows the company to track what viewers watch on their TVs, regardless of whether the content comes from a built-in app, an Apple TV, or a game console: anything on the screen is captured and analyzed. Texas Attorney General Ken Paxton believes this to be illegal surveillance under Texas law.
The Bottom Line: ACR is on by default, but you can turn it off. There is no advantage to leaving it on, so if you own a smart TV from one of these five brands, check out ZDNET's article summarizing how to turn off ACR.
LG TVs Now Come with Copilot AI
Owners of LG smart TVs may be surprised to find that their TVs now have a dedicated Microsoft Copilot app. The app was reportedly installed automatically with the TV's latest update and gives users no option to delete it (though you can hide it from your home screen). LG had previously expressed its intention to bring Copilot to its smart TVs, so this move isn't a huge surprise. However, forcing the app on consumers with no option to remove it is certainly not something we expected. You can find out more at Tom's Hardware.
The Bottom Line: We generally recommend against smart TVs, since they exist to collect data on your viewing habits. If you already have a smart TV, we recommend disconnecting it from your Wi-Fi and using a streaming box, like an Apple TV, instead.
Everything you need to know about Apple's latest software updates.
- The most recent iOS and iPadOS is 26.2
- The most recent macOS is 26.2
- The most recent tvOS is 26.2
- The most recent watchOS is 26.2
- The most recent visionOS is 26.2
Read about the latest updates from Apple.
The correct answer is the last one: None of the above. All of the items listed are potential signs of a deepfake video and should arouse some level of suspicion. But since all of them are also fairly common for regular people who are not deepfaking their videos, you should treat them with caution, not condemnation. Seeing one or two of these signs should tell you to start looking for more red flags.
There is far too much security and privacy news for us to cover it all. When building this newsletter, we look for scams, hacks, trouble, and news that illustrate the kinds of problems we Apple enthusiasts may encounter in our private lives, and the self-defense we can practice to keep our devices, accounts, and lives secure. Our commentary focuses on practical advice for everyday people. This newsletter was written by Cullen Thomas and Rhett Intriago and edited by August Garry.
Interested in learning the most secure ways to protect your accounts? Check out: