Editor’s Note: Arun Vishwanath is a technologist who studies the people problem of cybersecurity. The views expressed in this commentary are his own. View more opinion at CNN.
The continued prosecution of “all the President’s men” does little to stop the Russians from attempting to influence America’s upcoming midterm elections. And reports from Missouri to California suggest they are already looking for our cyber weaknesses to exploit.
Chief among their tools: spear phishing, the use of targeted emails containing hyperlinks to fake websites. The Russians used this method to hack into the Democratic National Committee's (DNC) emails and set in motion their 2016 influence campaign.
After two years of congressional hearings, indictments and investigations, spear phishing not only continues to be the most common attack used by hackers, but the Russians are still trying to use it against us.
That’s because the method has become even more virulent, thanks to the availability of sophisticated malware, some stolen from intelligence agencies; troves of people’s personal information from previous breaches; and ongoing developments in machine learning that can deep-dive into this data and craft highly effective attacks.
Just last week, Microsoft blocked six fake websites likely set up to spear phish American targets by the same Russian intelligence unit responsible for the 2016 DNC hack.
But the internet is vast and there are many more fundamental weaknesses still available to exploit.
Take the URLs with which we identify websites. Thanks to Internationalized Domain Names (IDNs), which allow websites to be registered in scripts other than the Latin alphabet, many fake websites used for spear phishing are registered using homoglyphs, characters from other scripts that look like English letters. For instance, a fake domain for Amazon.com could be registered by replacing the English “a” or “o” in the word “Amazon” with its Cyrillic look-alike.
Such URLs are difficult for people to discern visually. Even email-scanning programs trained to flag words like “password,” which are common in phishing emails (including the one the Russians used in 2016 to hack into Clinton campaign chairman John Podesta’s email), can be tricked. And while many browsers prevent URLs with homoglyphs from being displayed, some, like Firefox, still expect users to alter their browser settings for protection.
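As a rough illustration of why machines can catch what eyes cannot, here is a short Python sketch (the function name and logic are my own, not from any scanning product) that flags a domain whose letters mix Unicode scripts, the telltale sign of a homoglyph attack, and shows the Punycode form that browsers can display instead:

```python
import unicodedata

def mixed_script(domain: str) -> bool:
    """Return True if the domain's letters come from more than one script,
    e.g. Latin letters mixed with Cyrillic look-alikes."""
    scripts = set()
    for ch in domain:
        if ch.isalpha():
            # The first word of a character's Unicode name identifies its script,
            # e.g. 'LATIN SMALL LETTER A' vs. 'CYRILLIC SMALL LETTER A'
            scripts.add(unicodedata.name(ch).split()[0])
    return len(scripts) > 1

print(mixed_script("amazon.com"))       # all Latin letters: False
print(mixed_script("\u0430mazon.com"))  # Cyrillic 'а' among Latin letters: True

# Browsers that refuse to render such names fall back to the ASCII
# Punycode encoding, which begins with the "xn--" prefix
print("\u0430mazon.com".encode("idna").decode())
```

The two domains above are visually identical in most fonts, which is exactly the point: only a script-aware check, or the exposed `xn--` form, reveals the forgery.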
Making things worse is the proliferation of Certification Authorities (CAs), the organizations issuing the digital certificates that make the lock icon and HTTPS appear next to a website’s name in browsers. While users are taught to trust these symbols, an estimated one in four phishing websites actually has an HTTPS certificate. This is because some CAs have been hacked, meaning there are many rogue certificates in circulation, while others have doled out free certificates to just about anyone. For instance, one CA last year issued certificates to 15,000 websites whose names contained some variation of the word PayPal, nearly all of them used for spear phishing.
Besides these, the problem of phony social media profiles, which the Russians used in 2016 for phishing, trolling and spreading fake news, remains intractable. Just last week, the Israel Defense Forces (IDF) reported a social media phishing campaign, which it attributed to Hamas, that lured its troops into downloading malware via fake profiles on Facebook, Instagram and WhatsApp.
Also last week, Facebook, followed by Twitter, blocked profiles linked to Iranian and Russian operatives that were being used to spread misinformation.
These attacks, however, reveal a critical weakness of influence campaigns: by design, they use overlapping profiles across multiple platforms. The problem is that social media companies police their own networks, keeping the information they discover about such activities inside their own “walled gardens” instead of sharing it more widely.
A better strategy would be to host data on suspicious profiles and pages in a unified, open-source repository that accepts input from media organizations, security firms and even users who notice something awry. Such an approach would help detect and track coordinated social media influence campaigns, which would be of enormous value to law enforcement and to media organizations big and small, many of which get targeted by the same profiles.
A platform for this could be the Certificate Transparency framework, in which digital certificates are openly logged and verified. It has already been adopted by many popular browsers and operating systems. For now, this framework only audits digital certificates, but it could be expanded to encompass the auditing of domain names and social media pages.
Finally, we must improve user education. Most users know little about homoglyphs and even less about how to change their browser settings to protect against them. Furthermore, many users, after being repeatedly trained to look for HTTPS icons on websites, have come to implicitly trust them.
Many even mistake such symbols to mean that a website is legitimate. Because even an encrypted site can be fraudulent, users have to be taught to be cautious and to assess a range of website factors, from the spelling used in the domain name, to the quality of information on the website, to its digital certificate and the CA that issued it.
Such initiatives must be complemented with better, more uniform internet browser design, so users do not have to tinker with settings to guard against phishing.
Achieving all this requires leadership, but the White House, which ordinarily would be best positioned to address these issues, recently pushed out its cybersecurity czar and eliminated the role. And at a time when, according to the Government Accountability Office, federal agencies have yet to address over a third of the office’s 3,000 cybersecurity recommendations, the President instead talks about developing a Space Force.
Last we knew, the Martians hadn’t landed, but the Russians sure are probing our computer systems.