Scammers Are Using AI Deepfakes to Steal Millions of Dollars, Including Real Estate—Here’s How You Can Protect Yourself From Fraud

Security experts are sounding the alarm about the potential implications of artificial intelligence (AI)-generated deepfakes, which are artificial audio or video files that have been manipulated to portray someone’s likeness, in real estate fraud. 

Earlier this year, authorities in Hong Kong reported that a group of scammers used deepfake technology to steal more than $25 million from a multinational company by impersonating the company’s chief financial officer on a video call with an employee, who they tricked into transferring the funds. 

As deepfakes become more realistic and convincing, experts are worried fraudsters will use the technology to impersonate professionals in a real estate transaction, with the aim of intercepting a payment or collecting sensitive information. Even savvy real estate investors could experience financial losses or identity theft if they became the target of such a scheme. 

The Elusive Red Flags 

Any financial transaction carries the risk of fraud, but in the past, it was easier for investors to protect themselves with proper education. For example, buyers could look out for misspellings in an email address or other signs of wire fraud and verify the transaction with the real estate agent over the phone. But now, those red flags are tougher to spot—a fraudster could spoof a real estate agent’s phone number and use deepfake audio to impersonate their voice. 

Actor and filmmaker Jordan Peele warned us about the dangers of generative AI in 2018 when he created a deepfake video of Barack Obama warning Americans about misinformation online. "This is a dangerous time. Moving forward, we need to be more vigilant with what we trust from the internet," Obama appeared to say in the video, though viewers were actually watching a performance by Peele, produced with careful editing and 56 hours of processing. 

There was something off about the video beyond the humorous script—it contained characteristics of deepfake videos at the time, like jerky facial movements and changes in lighting. But as the technology gets more advanced, it’ll become easier for scammers to fool even the folks who are looking closely. 

The National Association of Realtors notes that in-person communication at some point in the transaction will be vital to protecting people from fraud. That means long-distance investors will need face-to-face contact with a local agent, at least to get their hands on an authentic phone number to call directly for verification. 

Scammers can use AI to manipulate more than audio and video. AI systems can also generate falsified documents that contribute to seller impersonation scams and other schemes. 

Fraud on the Rise—and What’s Being Done to Combat It

As AI technology advances and becomes more accessible to everyday scammers, investment fraud is becoming more prevalent, and victims are suffering greater financial losses. Investment fraud led to a record of more than $4 billion in stolen funds in 2023, while imposter scam losses totaled $2.7 billion, according to data from the Federal Trade Commission. Investment-related scams resulted in a median loss of $7,768. People were also more likely to report identity theft in 2023 than in previous years. 

AI scams were one of the five most common types of investment fraud in 2023, according to an analysis of FBI and FTC data conducted by Carlson Law. Software that detects AI content can be helpful, but it’s not 100% accurate. Neither is content provenance, which helps increase transparency around where content came from and whether it was created by a human or AI. The Content Authenticity Initiative, a group of tech companies, academics, and other organizations, is working toward an industry standard for verifying content authenticity through open-source development. 

Another problem is that fast-paced advances in AI technology require lawmakers to adapt quickly. Meanwhile, tech companies are making AI tools ever more accessible to everyday people. 

Policymakers are attempting to catch up, however. Last fall, the Biden administration issued an Executive Order designed to establish security standards, encourage the development of privacy measures, prevent AI civil rights violations, capture AI’s potential for healthcare and education, promote research on labor-market effects, and ensure government agencies use the tools responsibly. 

In February, the FTC also finalized the Trade Regulation Rule on Impersonation of Government and Businesses, which the Commission chair acknowledged was already insufficient by the time it was completed because the technology had continued to evolve. The rule allows the FTC to take scammers who impersonate businesses and government agencies to federal court. 

In light of increasing complaints from individuals about impersonation fraud, the FTC also proposed a supplemental rule that would extend the protections to cover individual victims of fraud. Additionally, the Commission is asking for public comment on whether the revisions should “declare it unlawful for a firm, such as an AI platform that creates images, video, or text, to provide goods or services that they know or have reason to know is being used to harm consumers through impersonation.”

If the latter provision were included, it would allow the FTC to hold tech companies liable for providing AI tools that facilitate scams, which might prompt tech companies to be cautious about making new deepfake technology available to their users. 

In a time when foolproof detection tools and adequate protections and enforcement measures are not yet available, media literacy is especially critical. Investors should be skeptical in general of anything that doesn’t feel right or sounds too good to be true, double-check the authenticity of documents and payment instructions, and stay in the know about new technology and current scams. 

How to Protect Yourself

Sometimes, AI can help solve the problem it created by detecting fake documents based on learned patterns. In late 2022, Intel launched a deepfake detection platform that can spot AI-generated video with 96% accuracy—for now. But AI detection will always be a step behind innovation, so it’s important for investors to take other precautions. 

The National Cybersecurity Alliance recommends the following:

  • Be mindful of what you share: Switch your social media settings to private to limit public access to your personal photos and videos, and add watermarks to any publicly available images. 
  • Follow AI news: Keep an eye on recent updates to AI technology and emerging scams so you know what to look for. 
  • Beware of phishing attempts: Be suspicious of anything that comes from an unknown source. Be sure to verify the identity of the sender before following payment instructions in an email or text, clicking any links, downloading files, or sharing any sensitive information. When communicating via video call, beware of urgent demands or hesitancy to connect directly by phone or in person. 
  • Report deepfakes: If you discover deepfake content impersonating the likeness of you or someone you know, report the content to the platform for removal, and file a complaint with federal authorities. Get help from a legal professional if necessary. 

A Double-Edged Sword

Even as they contribute to a higher risk of fraud, advances in generative AI continue to provide important, time-saving resources for real estate professionals. Investors and agents are already using chatbots to streamline communications, but the real potential of generative AI in real estate has yet to be fully realized. McKinsey & Company estimates the added value to the real estate industry as a result of generative AI could be between $110 billion and $180 billion. 

Already, McKinsey says real estate companies have seen greater than a 10% increase in net operating income by using AI to streamline processes, improve customer satisfaction and tenant retention, develop new sources of revenue, and make faster (and smarter) investment decisions. 

Today, most real estate investors comb through multiple data sources to analyze whether a market or property will be profitable. However, McKinsey notes that an advanced generative AI tool with access to the right data can perform a multifaceted analysis to prioritize listings investors should look into. This would be especially beneficial for newbies with no investment history to inform their decisions. A fine-tuned AI tool might allow a hopeful investor to simply ask, for example, “Which available duplexes in Cleveland should I invest in?” 

That’s just one way AI tools can give investors more free time and allow them to make more profitable decisions. 

The Bottom Line

There’s no question that AI technology will disrupt the real estate industry, creating new vulnerabilities in transactions while also enabling investors to act with precision and communicate with ease. However, the pace at which generative AI tools are advancing and becoming accessible will undoubtedly create challenges for policymakers and agencies dedicated to preventing fraud. 

If you can take advantage of the technology in your everyday work while also staying informed and taking steps to prevent fraud, AI may have a positive net impact on your business.

Ready to succeed in real estate investing? Create a free BiggerPockets account to learn about investment strategies; ask questions and get answers from our community of 2+ million members; connect with investor-friendly agents; and so much more.

Note By BiggerPockets: These are opinions written by the author and do not necessarily represent the opinions of BiggerPockets.
