Scams Created by AI are Fooling Over 50% of Their Targets
https://www.idefendhome.com/blog/onlineprivacy/aiscamsfooling50percent/
Mon, 13 Jan 2025

Unsurprisingly, AI is being leveraged by cybercriminals to create cunning, convincing, and highly effective scams. They're so effective that they're fooling half of their intended victims. Here's what to watch out for and how to avoid falling for these sophisticated scams.


When AI models first became an everyday talking point, many experts in the cybersecurity industry predicted that scammers and criminals would commandeer these tools for their own gain.

Unsurprisingly, this has proved true. In the last year alone, studies have shown that over half of these criminals' targets are falling for their AI-powered scams.

How Effective are These Scams?

According to a study conducted by Avant Research Group, participants fell for nearly 54% of the AI-generated scams, compared with only 12% of the traditional phishing emails used on the control group. That's a jump from roughly 1 in 10 people to more than 5 in 10.

That’s alarming.

Since many of these AI tools and technologies are still in their infancy, we're likely to see even more sophisticated scams showing up over the next couple of years. Who knows how high that number might get?

How Does This Affect You?

The researchers judged their results reliable (meaning we can trust the numbers) in 88% of cases, with inaccurate results for only 4% of participants. In other words, nearly 9 times out of 10 a potential victim is facing a scam so well crafted that it has roughly a 50% chance of convincing them.

These numbers should scare you, or at least draw your attention. How are you prepared to navigate a digital landscape where dangers like these exist, and are only going to keep getting better? How confident do you feel about landing in the 46% who didn’t fall for these scams vs the 54% who did?

Is There a Bright Side to All This?

Despite all the gloom and doom, there is some good news. AI models are also being trained by ethical hackers and cybersecurity professionals to recognize scams and flag or block them before they can become an issue.

Because of this, as long as you follow good practices, common sense, and the usual tips for protecting yourself from scams, you'll be all right in most cases.

Make sure you're doing the following:

  1. Use antivirus protection with active scanning, especially for your email accounts.
  2. Be wary of emails you weren't expecting.
  3. Don't open attachments you don't recognize.
  4. Never click on links in unsolicited emails (see the quick check sketched after this list).
  5. Always verify requests with the actual person directly instead of trusting an email.
  6. If it sounds too good to be true, it probably is.
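
As a quick illustration of tip 4, here is a minimal sketch in Python (the email snippet and domain names below are made up for demonstration, and this is not how any particular email scanner or iDefend tool works) showing one classic phishing tell an automated check can catch: a link whose visible text claims one website while the underlying address actually points somewhere else. You can do the same check by hand by hovering over a link and comparing the address your email client shows with the site the message claims to be from.

```python
# Rough illustration only: flag links whose visible text names one domain
# while the underlying href actually points somewhere else.
import re
from html.parser import HTMLParser
from urllib.parse import urlparse


def strip_www(domain: str) -> str:
    """Drop a leading 'www.' so comparisons aren't thrown off by it."""
    return domain[4:] if domain.startswith("www.") else domain


class LinkAuditor(HTMLParser):
    """Collects (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def looks_suspicious(href: str, text: str) -> bool:
    """True if the visible text claims a domain the link doesn't really go to."""
    real_domain = strip_www(urlparse(href).netloc.lower())
    claimed = re.findall(r"[\w.-]+\.(?:com|net|org|gov|edu)", text.lower())
    return any(not real_domain.endswith(strip_www(c)) for c in claimed)


# Hypothetical phishing snippet, invented purely for demonstration.
sample_html = '<a href="http://secure-login.example-payments.ru/verify">www.yourbank.com</a>'

auditor = LinkAuditor()
auditor.feed(sample_html)
for href, text in auditor.links:
    if looks_suspicious(href, text):
        print(f"Suspicious: the link says '{text}' but actually goes to {href}")
```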

Falling for Scams? iDefend Can Help!

Fortunately, iDefend members enjoy access to an expert support team who are always happy to help you review suspicious emails, adjust your antivirus and detection settings, and even recover from scams or clean up your devices if they get infected.

Learn more and get protected today! Try iDefend risk free and save 30%.

Smart Assistants are Getting a Little Too Smart
https://www.idefendhome.com/blog/onlineprivacy/smartassistantsgettingtoosmart/
Wed, 11 Sep 2024

While extremely convenient, smart assistants like Alexa, Siri, and Google may be listening in, learning from, and sometimes even sharing our private conversations. Are these devices getting a bit too smart for their own good? Is it worth the risk of having one? We’ll help you decide.


Technology has come a long way, and these days it is continually trending toward greater convenience backed by greater automation. By taking user input out of the equation as much as possible, large tech companies appeal to wider and wider audiences.

When it comes to a user’s personal privacy, that’s often an afterthought (if these companies even think of it at all), or an easily-filled pothole on the smooth road to convenience. In the case of smart listening devices, is it worth sacrificing your freedom and privacy for a slice of easy street?

What are Smart Assistants?

Digital assistants can be traced back to the PDAs of the late '90s and early '00s. There are likely many older and more experienced folks who would still recognize the name "Palm Pilot". While many of these devices undoubtedly paved the way for modern smartphones, they are hardly related to the privacy-encroaching tendencies of the plastic rectangles we sport in our pockets today.

Smart listening technology interprets a user's words into commands for the device to execute. Simply put, saying something as simple as "Set a three-minute timer" will immediately start a countdown clock, and after an "Add spring rolls to my shopping cart", you'll find yourself ready to check out and cook dinner.

Examples of smart assistants

Yes, your smartphone most likely contains this smart listening technology. You may have never really utilized it, but even something as simple as voice dictation (speech-to-text) takes advantage of this tech.

In recent years, this same technology has birthed a multitude of smart devices, each with the capability to listen to and interpret your commands. The most common of these are what are sometimes called "smart assistants". Popular examples include Amazon Alexa, Google Home, and Apple's HomePod; even something like Microsoft Copilot falls into the mix.

What are the Risks?

See if you’ve ever noticed something like this happen: you’re having a conversation with a friend about some obscure topic. Let’s say kayaks. The only other thing in range of hearing you is your phone. Later, when you’re browsing online, you start seeing ads for kayaks. You’ve never googled “kayaks for sale” or anything like that, and yet here you are suddenly seeing ads for them.

This is a prime example of how the tech on your phone was listening without your consent and influencing what ads were then served to you.

If this alarms you, just wait until you hear about all the things Alexas and their ilk get up to.

Data collection

One of the primary concerns with these devices is how they are constantly harvesting data on their users. Of course, they will claim that none of this makes it past them to advertisers or data miners, but do you really want to trust big tech companies on something like that? 

Another common phrase you'll see is something akin to "Help us develop new features" or "Improve your user experience". While you may see some of those benefits, it's always going to be at your own expense. By agreeing to such terms, you are effectively signing away your privacy and giving them free rein over your personal life.

What would you do if you got a knock on your door one day and outside stood a man or woman with a camera, tape recorder, and notepad? They ask if they can quietly sit in the corner of your family room or perch on your kitchen counter. All they want is for you to let them take as many pictures, notes, and recordings as they please. "You won't even notice I'm there," they promise.

That’s almost exactly what you’re agreeing to by bringing a smart assistant into your home.

Security

Besides the privacy aspect, these devices have to be connected to your wifi network in order to function. This means that if the device is ever hacked or compromised, a cybercriminal now has a foothold into the rest of your home network.

On the flip side of that, say one of your other devices gets compromised. A savvy hacker may then be able to take control of your smart speaker, allowing them to harvest and sell everything it hears you say.

Scams

Yes, there are scams linked to using these devices. This one is perhaps the most devious of all, in that it doesn't have anything to do with the actual security or integrity of the device itself. Instead, it relies on scammers setting up fake support websites, phone numbers, and help desks.

Then, when you ask your device to look up a customer support line for your issue, the scammers have ranked their fake results highly enough that the assistant takes them as gospel and reads the information back to you.

A lot of users don't think twice about the results their digital assistant brings them, and will happily go with the first option.

How Can I Make My Device Safer?

So, let’s assume you’ve read everything to this point and you still want a smart assistant. Are you just going to have to toss your privacy to the wind?

Luckily, there are some things you can do.

1. Adjust the device’s settings

Each of these smart assistants will have settings (typically accessible via a mobile app) that allow you to disable certain features, limit what kinds of things it can harvest, and even set active hours or schedules.

It’s always worth taking a few minutes to poke around in the device’s settings before you’re ready to actually use it.

2. Secure the network

Depending on how you want the device to interact with your household, you should consider putting it on its own network, or at least segregating it from your more important devices such as personal computers. In this way, even if a hack occurs, you’ll have an extra layer of protection.

3. Unplug it

For physical devices, this is the best thing you can do. As long as the device is unplugged, there’s no way it can listen to you. Sometimes, you may want to just unplug it if you’d rather it not overhear something, or simply to give yourself a break. 

Just be sure to plug it back in when you’re ready to use it!

iDefend Can Help You With Your Smart Devices

Fortunately, iDefend members enjoy access to an expert support team who are always happy to help you lock down your home. With a simple phone call, they'll assess your setup and develop an action plan to help you get your digital security and privacy up to snuff.

Learn more and get protected today! Try iDefend risk free and save 30%.

Amid privacy concerns, Apple's new AI is coming soon to an iPhone near you.
https://www.idefendhome.com/blog/onlineprivacy/openaiapple/
Fri, 14 Jun 2024

Your next iPhone may know more about you than you do. What is Apple's new AI capable of, and how does Apple reassure its customers that their private life is safe and secure?


Artificial intelligence (AI) has taken the tech world by storm. With Microsoft, Google, Amazon, Facebook, Snapchat, and others all integrating a form of AI into their systems, Apple found itself lagging behind. Now, with Apple's recent AI announcement, it has officially entered the race in a big way. Apple's home-grown AI, its partnership with OpenAI, and its integration with Apple iPhones, computers, and other devices are groundbreaking. But they also give rise to massive personal privacy concerns.

What is Apple's AI capable of, and how does Apple reassure its customers that their private information is safe and secure? Security and privacy have always been a selling point for Apple products, but what will the company do with unprecedented access to an unlimited flow of personal information collected from the devices you use every day?

Your privacy… going, going, gone?

Apple promotes its new AI (called Apple Intelligence) as “personal intelligence built into your devices to help you write, express yourself, and get things done effortlessly, every day. All while setting a brand-new standard for privacy in AI.” It’s the first real application of AI as a personal assistant – built into devices you use every day. Naturally, and rightly so, watchdogs for personal privacy and security are alarmed at the risks with this emerging technology.

The cause for concern with the new Apple Intelligence is not just the amount of detail it will learn about you, but who has access to that data. Your phone, and Apple, will potentially know more about you than you do.

Apple Intelligence will be able to scan and interpret all your photos, making it easy to find a photo of concert tickets, your driver's license, recipes, or vacation spots with a simple request. It will be able to identify and recognize your children, spouse, and friends in your photos, along with their role in your life. There are particular concerns about access to photos, videos, and voice recordings at a time when deep-fake videos and voice cloning have popped up across social media and are now used by scammers and online predators.

Siri will be supercharged by the power of OpenAI's models. With personal context and access to your everyday activities, Siri will use OpenAI to pull together information from calendar events, messages, and other apps for your benefit. If you have a meeting at the office and then your daughter's piano recital an hour later, Siri will plan each out accordingly and use Apple Maps to notify you when you need to leave to get to your next event on time.

If you ask when your mom's flight is going to land, Siri will pull the flight details from the email and tell you exactly when it takes off and lands. Then Siri can remind you when it's time to leave to pick her up, taking live traffic into account. If you forgot what dinner plans you made with your mom, Siri will search your messages and pull up the text to remind you of where and when the dinner reservation is.

As useful as the tools will be, the obvious question here is, if the AI can see and interpret your photos, emails, texts, calendars, and everything else, who has access to all your private information and daily activity? Can you trust them – or the other companies they affiliate with such as OpenAI? Will the government be able to get access to all this data anytime they want?

Clearly, questions about security and privacy remain. We are only scratching the surface of what Apple Intelligence will learn about us, and what it will do for us.

Voices of concern

Some of the benefits of this AI integration into your personal life might sound pretty good right now, but there is always a cost associated with such groundbreaking technology. If you're worried about your privacy and security, you're not alone. In fact, much of the tech world is voicing concerns about this Apple-AI integration.

Elon Musk has openly stated, "It's patently absurd that Apple isn't smart enough to make their own AI yet is somehow capable of ensuring that OpenAI will protect your security and privacy! Apple has no clue what's actually going on once they hand your data over to Open AI. They're selling you down the river." Musk has gone so far as to consider banning Apple devices from his businesses as a security measure in response to the announced Apple-OpenAI partnership.

Musk voiced concern that Microsoft, which owns half of OpenAI, has disbanded its ethics oversight division. “There is no regulatory oversight of AI, which is a major problem. I’ve been calling for AI safety regulation for over a decade!” Regulation may be the only hope of keeping our information private, but with AI being so new to our world, that may be years out. Of course, it’s highly likely government regulators won’t fully understand this and may cause more harm than good.

Sam Altman, CEO of OpenAI, has called AI "the greatest technology humanity has yet developed," saying this uncharted territory is thrilling yet should be walked carefully. Altman continued, "I'm particularly worried that these models could be used for large-scale disinformation. I think people should be happy that we are a little bit scared of this."

Apple says its AI will “usher in a new era of privacy”. The company says it will ask you for permission to use your private data before both Apple and OpenAI will have access to it, but once you give permission, your information is fair game. Apple claims that data from Apple Intelligence will “mostly stay on your device itself. If it needs to go to the cloud (and access ChatGPT), it will be on Apple’s secure server, not stored by OpenAI.”

Security professionals will be watching closely to see how this and other concerns will play out.

Recommendations

Apple Intelligence is deeply integrated into iOS 18, iPadOS 18, and macOS Sequoia, which are due to be released in Fall 2024. It will only be available on the iPhone 15 Pro, iPhone 15 Pro Max, and newer iPhone models, plus iPads and Mac computers with M1 or later chips.

At this point, consider waiting to upgrade to iOS 18 with Apple Intelligence (if you have an iPhone 15 or newer). If you end up having to buy a new iPhone, you may want to consider buying an older model. If you must have the newest iPhone when it's released later this year or next, consider turning off the specific Apple Intelligence features you aren't comfortable with, or disabling it altogether.

For the next couple of years, while all this new AI technology unfolds, be cautious and don't give up your privacy rights. Realize that AI is not going away, and that all smart devices, including Android, will eventually have built-in AI. Be careful about giving Apple or any other tech company permission to use AI to monitor, track, and use your or your family's personal information.

Whether we like it or not, AI will increasingly play a larger role in our lives. As it unfolds, we must continue to do everything we can to protect our privacy, our freedom, and not allow big tech or government to access, monitor, or control our digital lives.


Protecting your family’s online privacy is critical – iDefend can help

Everyone needs a personalized protection and support service for the digital age. iDefend protects your online privacy from data traffickers, scammers, predators, and unwanted surveillance.

Learn more and get protected today with iDefend. Try it risk free and save 30%.

A.I. Can Copy Your Voice
https://www.idefendhome.com/blog/onlineprivacy/aivoicecloningscams/
Thu, 02 May 2024

For several years now, AI has been employed to maliciously copy and replicate real people's voices in order to scam and trick loved ones and friends.


Scams aren't going anywhere. The trouble is, they've morphed from the Nigerian prince's gold bullion into a mysterious phone call from what sounds like your terrified daughter in the clutches of some vile kidnappers.

Voice cloning scams have already caused untold financial losses, as well as other damaging experiences such as sextortion and blackmail. It's sobering to think how much more these will grow in just a few years.

The unfortunate part is how easily these scammers can steal small clips of your voice and use AI software to extrapolate them into full-blown conversations. With all the advancements in AI, gone are the days of simply recording you saying "Yes" and playing that behind a clip of someone asking for your permission to do something nefarious.

The good news is there are several things you can do to help prevent this from happening to you, as well as things to be vigilant for if you encounter one of these scams in the wild.

How to Spot the Scam

By far the most common use of this scam is to solicit money from friends and family members. Because so much of our lives are accessible via a quick online search and on social media, it’s not difficult for a scammer to assemble a basic digital profile of a potential victim.

Because of this, you’ll want to be especially wary of any phone calls you receive that sound like someone you know asking you for unusual amounts of money.

Examine the phone number

This is one of the first things you should do when you get one of these calls: take a look at the phone number. Chances are you won't have it saved in your phone, and it may even have the wrong area code. When in doubt, hang up.

Verify with the real person

When this does happen, the best thing you can do is to immediately hang up and call that person directly yourself. Of course, if circumstances allow you could also pay them a quick visit in person. Simply ask them to confirm whether or not it was them that called you.

If they have no idea what you’re talking about, immediately block and report the number from which you received the phony call.

Pay attention to tells

If you are otherwise unable to contact your acquaintance and verify anything, pay attention to anything in their voice or speech patterns that sounds unusual. Sometimes, an AI clone will subtly give itself away.

How to Stop the Scam

The following are things you can do to safely limit your exposure to these scams. Remember that a scammer copying someone's voice doesn't automatically earn them any money; someone has to fall for it first.

Don’t answer unknown calls

This one may be hard for some of us who grew up before smartphones and caller ID were widespread conveniences. Nowadays, sadly, it's all too common for unknown numbers to be spam, robocalls, or scam calls.

The simplest way you can avoid most phone scams is to simply not answer calls from unknown numbers.

Get into the habit of following up with those you call

Put yourself on the other end of these calls for a minute. Let’s say you are dialing a new friend or acquaintance and they haven’t yet saved your number. If they, too, are following best practices, they should ignore your call and let it go to voice mail.

When this happens, you should leave them a quick message, but another thing you can do that’s often even faster is to send them a simple text message. Just let them know it was you who called and you’d like to speak with them when they’re available.

We know this step may seem trivial or mundane, but it's actually a great habit to get into, since the vast majority of people will check a text message much sooner than a voicemail.

Verify with the real person

Yes, this is the same suggestion as before, but it holds just as much weight being listed here as well. It’s always good to verify with the real person before sending them money.

Save numbers in your phone

For anyone you know and trust, save their number in your phone. This will spare you the trouble of having to play “phone tag” and will give you more peace of mind knowing that you are talking to the real person.

Be extra cautious

Even if you’ve saved someone’s number, always remember to be cautious and verify with them before sending money. Scammers steal billions of dollars every year, and this number will only increase as technology gets better, so please be cautious and safeguard your money.

Snapchat AI is Your Teen's Private Chatbot – Free From Your Prying Eyes
https://www.idefendhome.com/blog/familysafety/snapchatai/
Mon, 22 Apr 2024

Snapchat, known for its disappearing messages and creative filters, has evolved over the years to slowly incorporate advanced AI features.


In the digital landscape of today, social media platforms have become integral parts of our daily lives. While these innovations offer convenience and entertainment, they also raise significant concerns, especially when it comes to the safety and well-being of teenagers.

Among these concerns is the omnipresence of Snapchat’s AI, which can be accessed at any time, and the potential dangers it might pose.

Chatbot

One of the most striking aspects of Snapchat’s AI is its chatbot, an interactive feature that allows users to ask questions, seek advice, and engage in conversations. For teenagers, this seemingly harmless function can present a myriad of risks. The allure of a “digital friend” that is always available to chat might lead teens to seek guidance from the AI on sensitive or personal matters.

Whether it’s questions about relationships and sex, mental health, or even more troubling subjects like self-harm, the AI provides immediate answers without the nuance and empathy of a human response.

Teenagers are impressionable

The danger here lies in the misconception that this AI is a reliable source of information and support. Teenagers, who are still developing their critical thinking skills, might take the AI’s responses at face value without questioning its accuracy or understanding its limitations. This can lead to misinformation, misguided decisions, and a lack of human connection when they might need it most.

Privacy Concerns

Furthermore, the constant presence of the Snapchat AI raises privacy concerns. Every interaction with the chatbot is recorded and stored, contributing to the vast data pool that fuels Snapchat’s algorithms.

This data can be used to create targeted advertisements, personalize user experiences, and even predict behaviors. For teenagers, whose online behaviors are still evolving, this data collection can have long-lasting consequences.

You Can’t Remove It

Another alarming aspect is the inability to remove the AI from the app. Unlike traditional chatbots or apps where users can opt out or disable features, Snapchat’s AI is deeply integrated into the platform. The only way to remove it is by subscribing to Snapchat+, the premium version of the app.

This creates a dilemma for parents and teens alike. As a parent, if you don't want your child interacting with the AI, there is no way to hide or remove it without paying for the premium tier, which leaves your child with unlimited access.

What to Do as a Parent

As parents navigate the complexities of raising teens in the digital age, understanding these dangers is crucial. Initiating conversations about online safety, critical thinking, and the limitations of AI can empower teens to make informed decisions. Encouraging them to seek human support when needed, whether from trusted adults or professional resources, can provide a balance to the allure of the always-available AI friend.

While Snapchat’s AI offers convenience and entertainment, its potential dangers should not be underestimated. From providing questionable advice to collecting vast amounts of personal data, the implications for teenagers are significant. As we move forward in this digital era, it is essential for parents, educators, and society as a whole to remain vigilant, informed, and proactive in safeguarding the well-being of our teens in the digital realm.
