You get important news and warnings about security and privacy on the internet, plus a bonus for investors!
(Be patient – this page takes a few seconds to load.)
On the PRIVACY POLICY page, you will find my recommendations for a broad strategy to protect your computer from hackers.
On this page, I give you the latest news and advice on this subject. You alone can take care of your own security and privacy and this requires some knowledge, strategy and constant vigilance.
If you know of any security news sources in German or Polish, send me their web addresses and I will try to add them to this page.
DISCLAIMER
* Copyrights belong to each article's respective author.
** Although this page should be free from tracking and other hazards, I can't guarantee that once you click any links to external websites.
1Password CLI brings seamless biometric authentication to your favorite terminal-based editor, Neovim.
As a full-time Neovim user, the more things I can do without leaving my terminal, the more efficient my development workflow can be. However, command line tools that require authentication can present a potentially big problem: They all have their own ways of storing credentials, often using plaintext files stored on disk. We can mitigate this and keep everything safe and secure in 1Password using 1Password CLI!
Neovim is a flexible text editor that runs in a terminal. It is a modal editor, which means there are several “modes” that are optimized for different types of interactions with the interface. For example, there’s Insert mode for typing text, Visual mode for selecting text, Normal mode for navigating around and manipulating text, and Command mode for running commands.
While very basic with the default configuration, it can also be highly customized and endowed with all the same magic as a full-fledged IDE, while still maintaining the speed and efficiency that comes with learning to use a modal editor effectively.
If you’re starting a Neovim configuration from scratch, I highly recommend using Lua (as opposed to Vimscript). If you’re already familiar with Neovim, feel free to skip ahead. To learn the basics of using Neovim, open Neovim by running the nvim command, then type :Tutor<Enter> to run the tutorial.
To kickstart your Neovim configuration, we’ll start with kickstart.nvim, an open-source configuration file that you can use to build your own configurations and personalizations. kickstart.nvim does several things for us. It sets some of the most common options to more sensible defaults, and installs packer.nvim, a popular plugin manager for Neovim. It also installs some popular plugins via packer.nvim, including Git integration.
To use this as your starting configuration, simply download the init.lua file and place it at ~/.config/nvim/init.lua. Then, from a terminal, run nvim and wait for plugins to install before restarting Neovim by typing :q<Enter> to exit, then nvim to open it again.
Recently I found a Neovim plugin called octo.nvim that provides a nice interface for searching issues, applying labels, and even adding comments and Pull Request reviews, all without ever leaving Neovim. This plugin uses the GitHub CLI to interact with GitHub via the GraphQL API.
Unfortunately, it only seemed to support authentication using the GitHub CLI’s built-in credential manager (the gh auth login command). However, I already had a GitHub token in 1Password and I didn’t want to export that to another place I’d have to remember if I ever needed to reset my token. I set off on a mission to make octo.nvim and the GitHub CLI integrate with 1Password CLI to retrieve my token directly from 1Password.
To make that possible, I needed to make a small change to octo.nvim that would allow the plugin to dynamically request the token only when needed. I made a small Pull Request which added a configuration option called gh_env (short for “GitHub environment”) which would allow the user to pass a set of environment variables, or a function that returns a set of environment variables, that would be used when running GitHub CLI commands.
This Pull Request was merged quickly, which then allowed me to easily integrate octo.nvim with 1Password CLI using my own plugin, op.nvim, a 1Password plugin for Neovim. The op.nvim plugin provides some first-class editor features for 1Password, like a secure notes editor and a sidebar for favorited items and secure notes.
But what’s particularly interesting in this case are the native Lua API bindings to 1Password CLI. This means you can run 1Password CLI commands in a way that just feels like writing Lua code. For example, require('op.api').item.get({ 'GitHub', '--format', 'json' }) will retrieve an item from 1Password called “GitHub” in JSON format.
If you haven’t already, install 1Password CLI and the GitHub CLI. You may also want to check out the 1Password Shell Plugin for the GitHub CLI!
Before we can interact with GitHub via the GitHub CLI in Neovim, we first have to create an access token to use for GitHub authentication. Open the GitHub developer settings page and create a new “classic” token. In the “Note” field, write “Neovim” (or anything that will remind you what it is used for) and grant it the repo, read:org, and write:org permission scopes.
Then generate the token and save it to your GitHub login item in 1Password, under a field called “token”.
To install the required Neovim plugins, open the ~/.config/nvim/init.lua file you created earlier. Near the top, where you see the other use statements, add the following snippet of Lua code, which will install octo.nvim and op.nvim:
use({
  'pwntester/octo.nvim',
  requires = {
    -- 1Password plugin for Neovim
    'mrjones2014/op.nvim',
    -- another plugin to make the UI a bit nicer
    'stevearc/dressing.nvim',
  },
})
Next, jump to near the bottom of the file, add a new section for the configuration of the octo.nvim plugin, and add the following code:
require('octo').setup({
  gh_env = function()
    -- the 'op.api' module provides the same interface as the CLI;
    -- each subcommand accepts a list of arguments
    -- and returns a list of output lines.
    -- use it to retrieve the GitHub access token from 1Password
    local github_token = require('op.api').item.get({ 'GitHub', '--fields', 'token' })[1]
    if not github_token or not vim.startswith(github_token, 'ghp_') then
      error('Failed to get GitHub token.')
    end
    -- the values in this table will be provided to the
    -- GitHub CLI as environment variables when invoked,
    -- with the table keys (e.g. GITHUB_TOKEN) being the
    -- environment variable name, and the values (e.g. github_token)
    -- being the variable value
    return { GITHUB_TOKEN = github_token }
  end,
})
Then, close and reopen Neovim, and run the :PackerSync<Enter> command to install the new plugins and apply configuration changes.
With this configuration, the octo.nvim plugin will automatically request authorization via 1Password CLI, and if enabled, even use biometric authentication via 1Password! To try it out, open Neovim from a GitHub repository directory and run :Octo issue list to list issues in the GitHub repository.
Enjoy using 1Password in Neovim! If you run into any snags or just want to share your experience, join us in the 1Password Developer Slack.
Here at 1Password, we’re big fans of two-factor authentication (2FA). It adds an extra layer of protection to your online accounts, making it much harder for attackers to break into them.
One of the strongest forms of 2FA is a FIDO2/WebAuthn hardware security key, like a YubiKey. That’s a small USB dongle that you plug in to your device, or tap via NFC, to authenticate who you are.
We recently introduced the option for 1Password Business admins to enforce this type of 2FA inside their organizations. Once enabled, all team members will be required to use a physical security key when they first sign in on a new device at work.
1Password is the only major password manager that gives you the choice to enforce FIDO2/WebAuthn hardware security keys in this way.
We understand that the strength of your security matters. That’s why we’re giving you the choice to level up your digital defenses by ensuring your team is using the strongest possible form of 2FA with 1Password.
“YubiKeys provide an extra layer of protection for your 1Password account,” said Derek Hanson, vice president, solutions architecture and alliances, Yubico. “With phishing-resistant YubiKeys, our customers receive the highest level of hardware-based security and a great user experience for those who want to use the same security key across services, browsers and applications.”
2FA is designed to prove that you or someone you trust – and not a criminal – is trying to access or sign in to something.
There are many different ways to use 2FA, most of which revolve around special one-time codes.
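The most common of those one-time codes are TOTPs (time-based one-time passwords). To make the later comparison with security keys concrete, here’s a minimal sketch of how a TOTP is derived from a shared secret and the current time, following RFC 6238 with HMAC-SHA1 – the point being that the code is just a short-lived secret that a person can be tricked into revealing:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                       # 30-second time window
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: at t=59s the 8-digit SHA-1 code is 94287082
print(totp(b"12345678901234567890", for_time=59, digits=8))  # -> 94287082
```

Anyone who learns the shared secret – or who phishes the six digits within the 30-second window – can authenticate. That’s the gap security keys close.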
Security keys are a particularly strong form of 2FA for two reasons. First, they’re resistant to phishing. An attacker could send a fake but seemingly legitimate email asking you or another team member for a TOTP, or a 2FA backup code. A FIDO2/WebAuthn security key, meanwhile, only works with the owner’s chosen (and legitimate) websites and apps.
Second, hardware security keys are a possession factor, which means that authentication is tied to a physical object. It’s highly unlikely a criminal will target you (or one of your co-workers) specifically, and then travel to your location and try to steal your key. The process is simply too expensive and time consuming.
Instead, criminals are more likely to try other tactics, like phishing, that can target many people at once and be initiated remotely.
Security keys are also a small step toward a passwordless future. They eliminate one-time codes, which is one less piece of information that you and your co-workers have to copy or type out.
1Password supports all FIDO2/WebAuthn security keys, including those made by Yubico.
Enforcing security keys eliminates TOTPs from the process of signing in to 1Password, while strengthening your overall security by combating phishing attacks, which are increasing in frequency and sophistication.
Once enabled, this requirement will cover all the 1Password apps that your team uses for work, including 1Password 8 for Mac, Windows, and Linux.
To enforce hardware security keys at your organization, follow the steps outlined on our support page.
Your co-workers will then need to add their security keys the next time they sign in or unlock 1Password.
Strengthen your security by enforcing FIDO2/WebAuthn keys in your organization. It will safeguard your team’s data and give you peace of mind, allowing you to focus on other tasks at work. You’ll also be helping your co-workers develop good security habits inside and outside the office – a crucial step toward building a strong culture of security.
Ready to get started? Read our support page for step-by-step instructions on how to enforce FIDO2/WebAuthn security keys when signing in to 1Password.
Almost everyone understands what passwords are, and how they work. But passkeys? That’s a different story.
Here at 1Password, we’re excited about passkeys, which let you create online accounts and securely sign in to them without entering a password.
But we know it’s early days, and the technology hasn’t gone mainstream (yet!).
Many people don’t know what a passkey is, or have heard an explanation that isn’t quite right. Here, we’re going to address some of the most common misconceptions so you can better understand how passkeys work, and use them with total confidence.
Many of us use biometric authentication to unlock our devices and access our favorite online accounts. But in these scenarios, your biometrics don’t eliminate your password.
Passkeys, meanwhile, act as a replacement for traditional passwords.
Here’s a quick summary of how passkeys work:
Passkeys leverage an API called WebAuthn. Instead of a traditional password, WebAuthn uses public and private keys – otherwise known as public-key cryptography – to check that you are who you say you are. The advantage of this approach is that you never have to share your private key (hence the name), and the public key is useless to an attacker on its own.
If there were a password behind every passkey, it would still be possible to “phish” the account owner. Passkeys are resistant to phishing because there’s no plaintext password or ‘secret’ that the user can be tricked into sharing, or that an attacker can try to intercept. This makes passkeys a more secure option than a traditional password.
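To see why there’s no shared secret to steal, the challenge–response flow behind passkeys can be sketched with any signature scheme. The toy below uses textbook RSA with tiny primes purely for illustration (real passkeys use far stronger keys, typically ECDSA P-256): the website stores only the public key and verifies a signed random challenge, so nothing phishable ever crosses the wire.

```python
import hashlib
import secrets

# Toy RSA key pair (textbook-sized, for illustration only).
# A real authenticator generates a strong key pair, e.g. ECDSA P-256.
p, q = 61, 53
n = p * q              # public modulus
e = 17                 # public exponent  -> public key (n, e) goes to the website
d = 2753               # private exponent -> private key (n, d) stays on the device

def challenge_hash(challenge):
    # Reduce the challenge to an integer the toy key can sign
    return int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n

def sign(challenge):
    # Authenticator: sign the server's challenge with the private key
    return pow(challenge_hash(challenge), d, n)

def verify(challenge, signature):
    # Server: check the signature using only the stored public key
    return pow(signature, e, n) == challenge_hash(challenge)

challenge = secrets.token_bytes(32)   # server sends a fresh random challenge
sig = sign(challenge)                 # device signs it after a biometric check
print(verify(challenge, sig))         # -> True; the private key never left the device
```

Because each sign-in signs a fresh random challenge, a captured response is useless for any future login – there’s simply no reusable secret to phish.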
At first, websites and apps will likely offer passkeys alongside traditional password authentication. That way you’ll have a choice, and can use both methods in tandem if you wish.
Some articles have implied that a Bluetooth connection is required to successfully authenticate and sign in to accounts using passkeys.
That’s simply not true.
When you create a passkey, the website will ask you to confirm your authenticator. This could be your phone, tablet, PC … or, in the not so distant future, 1Password. The next time you want to sign in, your device will ask you to authenticate using your face or fingerprint as a security measure, but that’s it.
Bluetooth only plays a role if you create a passkey using one of the solutions offered by Apple, Microsoft, or Google, and then need to access that same passkey from a device that sits in a different company’s ecosystem.
For example, let’s say you create an online account with a passkey using Google’s password manager on your Android phone. And then you want to access that same account on your Windows PC. In this scenario, you’ll normally be prompted to authenticate using your Android phone.
Bluetooth is required to check that your Windows PC and Android phone are physically close to each other. (This is to prevent phishing.) But passkeys don’t rely on Bluetooth’s security properties to secure the actual sign-in process.
That’s why if you’re using the same device, or a solution that syncs your passkeys between devices, you don’t need a Bluetooth connection.
A single passkey isn’t a master key that can unlock all of your online accounts. You’ll still need to create a passkey for each online account.
That might sound a little tedious, but in practice passkeys are incredibly convenient to create, store, and use. That’s because:
You don’t have to create anything manually. Your authenticator will generate a passkey – which contains a public and private key pair – on your behalf.
Every passkey is strong by default. So you don’t have to worry about whether your private key is long or random enough.
You don’t have to remember or type out your passkeys. Your private key is stored on your device, and retrieved automatically when you want to sign in to your account. A copy of your public key is stored with the account provider so you never have to type it out. Instead, your passkey is processed seamlessly in the background when you select ‘Sign in’.
Your phone is a safe place to store your passkeys. For starters, most hackers won’t travel to wherever you are because pickpocketing is neither cheap nor time effective. Instead, attackers will likely try other tactics that don’t require them to leave their computer.
If someone did manage to steal your phone, it would still be difficult for them to find and exploit your passkeys. That’s because they would need to unlock your device first. If you’ve secured your phone with biometrics, or an alternative method that’s difficult to guess – like a strong and unique password – an attacker will have a hard time breaking in and accessing your passkeys.
Your confidential passkey data (e.g. the private half of every key pair) is also stored in a Trusted Platform Module (TPM) that is virtually impenetrable.
The bottom line is that you can rest easy knowing that your passkeys are well protected, even if a hacker managed to steal your phone.
What happens if you arrive at work and realize you’ve forgotten the phone that has all your passkeys? Will you be locked out of all your online accounts? Not necessarily.
Google, Apple, and Microsoft will sync your passkeys across devices using their respective cloud-based storage services. So if you create a passkey using an iPhone, you can access the same passkey on your other Apple devices via iCloud.
Okay, but what happens if you’ve forgotten your iPhone, but need to use a Windows PC in a public library? In this scenario, you should be given a second option to sign in. For example, a website might send you a “magic link” — a one-time link that lets you instantly sign in — to your chosen email address.
Passkey support is also coming to 1Password! (Sign up to our passwordless newsletter for updates!) This will let you access your passkeys on all your devices, regardless of which operating system they run, and any major web browser. That way, there’s no need to worry if you leave your phone at home one day.
It’s natural to worry about what would happen if you broke your phone. Or what would happen if you left your laptop in a public place, like a cafe, and went back only to discover it had vanished.
As we’ve already covered, it’s possible to sync your passkeys between devices. Apple, Google, and Microsoft will offer to sync your passkeys within their respective ecosystems. And, later this year, you’ll be able to use 1Password to create, store, and seamlessly sync passkeys.
If you don’t opt in to syncing and lose the device that contains your passkeys … your passkeys will be lost. But don’t worry! You’ll still have other options to access your accounts, like magic links. Once you’ve successfully signed in, the site or app should then give you the option to create a new passkey.
The simpler and less stressful option is to sync your passkeys between devices. With 1Password, you’ll soon be able to create, save, and access passkeys on any piece of hardware, alongside your passwords, credit cards, and other digital secrets.
Unlike a password, you can’t change your face or fingerprint. (Not easily, anyway!) With this in mind, you might be worried about the possibility of someone stealing your biometric data, and then using that to wreak havoc with your passkeys.
Researchers have proven that some Android phones can be fooled by a high-quality photo of the device’s owner. This has led to more Android devices with depth-sensing cameras and 3D mapping technology similar to the iPhone.
Depth mapping allows your device to turn a photo of your face into a mathematical representation that’s only ever stored locally, and never transmitted over the internet. For example, your Apple device stores biometric data encrypted with a key made available only to the Secure Enclave — a component built specifically to safeguard and process sensitive data.
Apps that offer biometric authentication never have direct access to that data. Instead, a request is sent to the Secure Enclave. It verifies your identity by ensuring the stored mathematical representation of your face matches the one currently being presented.
So, what does all this mean?
A theoretical attacker needs physical access to your device and a flawless representation of your face or fingerprint. Obtaining both is incredibly difficult.
The chances of someone breaking into the Secure Enclave are also extremely slim. And even if they did, they wouldn’t find a picture of your actual face.
The bottom line is that passkeys are safe and convenient for the vast majority of people. That’s why we’re so excited about the technology, and are working hard to make passkeys simple enough for everyone to use in their daily lives.
Of course, 1Password will continue to protect your traditional passwords. But we look forward to helping you create, store, and sync passkeys too, so you can live an even simpler, more secure life online.
Read the latest passkey announcements by 1Password, as well as helpful guides, explainers, and community chatter about passwordless authentication.
There’s one thing IT and security professionals can never have enough of: visibility. Now, 1Password Business customers can gain even greater visibility into their security posture with the upgraded Events API.
The enhanced Events API features full event parity with the 1Password Activity Log, both to expand your field of vision and to support your auditing efforts.
You can’t protect what you can’t see. With the original Events API, you could stream some 1Password events to your SIEM (Security Information and Event Management) tool.
Those 1Password events could then be incorporated into custom dashboards, alerts, visualizations, and search, for example, to give you a deeper understanding of how your team uses 1Password.
The Events API makes it easy to correlate and enrich 1Password events data to surface security insights that may require action. Think automated alerts for threat detection, and the ability to visualize 1Password usage.
That means you can monitor user adoption, set up alerts to be notified when a secret is shared, or aid investigations by correlating logins with suspicious events. All by streaming 1Password events to third-party SIEM tools using the 1Password Events API.
The original Events API included support for three event types: successful sign-in attempts, failed sign-in attempts, and item usage.
The enhanced Events API adds support for all events captured by the 1Password Activity Log.
With these additions, 1Password Business customers can combine 1Password events with data from their SIEM tool to surface deeper security insights.
Note that if you’re still using 1Password CLI 1.0 to retrieve auditing events, these Events API enhancements have replaced the audit command in CLI 1.0.
1Password Business customers can stream events directly from 1Password Events API to their SIEM tool today, either through pre-built integrations with Splunk (coming soon) or Elastic, or with a custom integration.
Want to start small? Try running a lightweight Python script to learn how to make calls to the Events API. Or dive into the documentation to get started with the 1Password Events API and your chosen SIEM tool.
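For a rough idea of what such a script involves, here’s a minimal Python sketch that builds an authenticated request for recent sign-in attempt events. The endpoint path and payload shape are assumptions to verify against the current Events API documentation, and the token placeholder is hypothetical:

```python
import json
import urllib.request

# The endpoint path and payload shape below are assumptions based on the
# 1Password Events API docs -- verify them before relying on this sketch.
EVENTS_API = "https://events.1password.com/api/v1/signinattempts"

def build_request(bearer_token, limit=20):
    """Build an authenticated POST request for recent sign-in attempt events."""
    payload = json.dumps({"limit": limit}).encode()
    return urllib.request.Request(
        EVENTS_API,
        data=payload,
        headers={
            "Authorization": f"Bearer {bearer_token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("REPLACE_WITH_YOUR_EVENTS_API_TOKEN")
# To actually fetch events, send the request and inspect the response:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

From there, the same pattern extends to the other event types, and the JSON responses can be forwarded to whatever SIEM ingestion endpoint you use.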
Start sending 1Password account activity to your SIEM tool for deeper security insights.
For 17 years, we’ve prided ourselves on making 1Password a delight to use. But no product is perfect, and when I hear of someone getting stuck, I get curious. How can we fix it? How can we prevent that friction for future customers?
Today, we’re taking a step toward being able to better understand those moments by embarking on an internal, employee-only trial of our new in-app telemetry system. And, of course, we’re doing it the 1Password way – making sure it doesn’t compromise on our commitment to protecting your privacy and your data.
Here’s a quick summary of what’s happening:
1Password is beginning an internal test of our new, privacy-preserving in-app telemetry system. Initially, this functionality will be active only for 1Password employee accounts using the latest beta builds of the app.
No customer vault data can be seen or collected. We’re only interested in how people use the app itself, what features and screens they interact with – not what they store in their vaults, what sites they autofill on, or anything like that.
This data will be gathered from a randomized selection of accounts, de-identified, and processed in aggregate. This approach allows us to avoid associating telemetry data with individuals or accounts.
Customer accounts are not included for now. Once we’re confident it delivers on our privacy standards, we’ll announce a timeline for rolling telemetry out to customer accounts. At that point, we’ll also provide guidance on how you can opt out if you’d like to.
These days, we know that collecting “analytics” and “usage data” is often an excuse to invade your privacy, so I want to make this very clear: that’s not what’s happening here.
We have always bent over backwards to avoid collecting any unnecessary information about you in our systems. We believe you fundamentally can’t have security without privacy, and it’s always been our mission to deliver both. Nothing about that is changing.
So why add telemetry? Why now? We often remind our customers that they can’t protect what they can’t see. The same principle applies to understanding what product decisions to prioritize.
Over the years, we’ve relied on our own usage in conjunction with your feedback to inform our decision making. This presents a challenge, though: we don’t know when you run into trouble unless you tell us. And sure, we have an extensive user research program, and listen to all of the feedback you share online and in conversations with our team.
But there are millions of people using 1Password now, often in cool and innovative ways! If we’re going to keep improving 1Password, we can no longer rely on our own usage and your direct feedback alone.
That’s why we’ve been working hard to find a way to collect the information we need to make better decisions, without putting your data or privacy at risk. The goal is to equip ourselves with the visibility needed to ship updates that solve real problems and make 1Password better for everyone.
As our investigation into gathering app usage data unfolded, it became obvious that none of the off-the-shelf solutions were the right fit for 1Password. We needed a system that didn’t come at the expense of our customers’ privacy.
The approach we’ve landed on is designed to keep usage data from being attributable to individual people or accounts. It simply allows us to see where we aren’t living up to the high standards for user experiences you’ve come to expect. These additional signals will help us prioritize our efforts so we can deliver those great experiences faster, and more reliably.
Here’s the gist of how it works: we’ll be able to gather only a small set of general events and interactions within our apps. Things like when you unlock the app, when you create a new item (but not its contents!), or when you use autofill (but not what sites you use it on!).
Furthermore, this data will be de-identified through a variety of methods, starting with being collected from a randomized group of accounts. The gathered data is then processed in aggregate to provide general insights only.
This approach prevents us or anyone else from associating telemetry data with individuals or accounts.
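To make the pattern concrete – this is a hypothetical sketch, not 1Password’s actual pipeline, and the field names are invented – de-identified aggregation boils down to sampling a random subset of accounts, discarding identifiers, and keeping only aggregate counts:

```python
import random
from collections import Counter

# Hypothetical event stream -- the field names here are invented for
# illustration, not 1Password's actual telemetry schema.
events = [
    {"account_id": f"acct-{i % 50}", "event": name}
    for i, name in enumerate(["unlock", "autofill", "item_create"] * 100)
]

def aggregate_deidentified(events, sample_rate=0.2, seed=42):
    """Sample a random subset of accounts, drop identifiers, count in aggregate."""
    rng = random.Random(seed)
    accounts = sorted({e["account_id"] for e in events})
    sampled = {a for a in accounts if rng.random() < sample_rate}
    # Only the event names survive; account identifiers are discarded entirely.
    return Counter(e["event"] for e in events if e["account_id"] in sampled)

counts = aggregate_deidentified(events)
print(counts)  # aggregate totals only -- nothing traceable to an account
```

The output answers questions like “how often is autofill used?” without retaining any link between an event and the account that produced it.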
And, of course, once this functionality rolls out to customers, you’ll be able to control whether or not telemetry is active on your account.
We want to be 100% certain we have this right before we consider rolling it out to customers. That’s why we’re testing it on our own accounts here at 1Password first.
Soon, the beta builds of our apps will include this new telemetry functionality. It only works on 1Password employees’ accounts, so there’s nothing you need to do at this stage. We just wanted to be transparent with you as these plans take shape.
We expect our testing and rollout to take some time, and we’ll let you know when we’re ready to roll things out to a wider group. In the meantime, if you have any questions or thoughts about this, please reach out and let us know.
As always, thank you for your continued trust and support. We don’t take it for granted, and we wouldn’t be where we are today without you.
Unlock with Okta has been available in public preview since February. Starting today, all 1Password Business customers can sign in to 1Password using Okta instead of their account password – and support for other SSO providers is coming soon.
People just aren’t built to juggle all the logins we use for work. IT departments spend so. much. time. on login-related issues that adopting 1Password reduces IT support tickets by 70%. That can save your IT team members 291 hours each per year – a $286,000 efficiency gain.
Single Sign-On (SSO) helps, too. SSO can reduce your attack surface, strengthen minimum security requirements, and reduce IT support costs. It’s also a better login experience for workers, giving them a single set of credentials to log in to every service covered by your SSO provider.
Now, you can combine 1Password and SSO to enforce stronger authentication policies, improve auditing capabilities, and give employees a simpler sign-in experience.
Together, Okta and 1Password further simplify and strengthen security – in a way that SSO, individually, can’t. While Okta protects logins for approved apps that you specifically add to Okta, 1Password protects virtually everything else.
That includes payment cards, sensitive documents, developer secrets, and logins not added to Okta. And it’s all woven into a comprehensive enterprise security suite with granular admin controls, actionable insights, and extensive reporting.
When you use Unlock with Okta to access your 1Password account company-wide, you can:
Pairing 1Password with your existing identity and access management (IAM) infrastructure fills the gaps in your sign-on security model and secures your employees no matter how they sign in.
And because onboarding and offboarding are critical pieces of the puzzle, you can connect 1Password to your identity provider via the 1Password SCIM bridge to automate provisioning and deprovisioning.
It’s all done the 1Password way. Zero-knowledge architecture and end-to-end encryption are preserved, and decryption still happens on-device. Credentials are composed of the same values traditionally derived from the account password and Secret Key, and are decrypted on employee devices – which means that, as always, we don’t store or have access to the keys we would need to decrypt your data.
We’ve gone into detail about the technical underpinnings of our approach to SSO, but here’s the bottom line. Because we’re using a trusted device model, even if your identity provider credentials are compromised, attackers still wouldn’t be able to access your 1Password data.
But the 1Password way is about more than uncompromising security. Great usability is a security feature – if it’s not easy to use, workers will find a workaround in their pursuit of productivity. So we’re not willing to sacrifice ease of use in the name of security. Instead, we find ways to enhance ease of use through security, and vice versa. SSO is no different.
For admins, setting up Unlock with Okta for your company is simple. You’ll notice a new “Unlock with Identity Provider” heading in the “Security” section of your admin dashboard. This is where you’ll manage the Okta configuration in 1Password.
Select Okta as your identity provider, enter your Okta account details, and test the connection. Once complete, you’ll see a “Successful Connection” notification.
Next, you can customize your rollout strategy. We recommend a staged rollout for most companies, but you have choices. Either select specific groups to start out and add more later, roll out Unlock with Okta to everyone except guests, or roll it out to everyone at once.
You can also choose the length of time you’d like to give employees to complete the migration. Once the period of time you select runs its course, all employees included in the rollout will be required to use Okta to sign in to 1Password.
Prior to that, they can continue to sign in using their account password and Secret Key. Each employee included in the rollout will receive an email notification with those details, along with a prompt directly within 1Password 8 to begin making the switch.
When your admin enables Unlock with Okta, you’ll see a welcome screen the next time you log in to 1Password on any device using your account password. To add your first trusted device, follow the steps outlined on the welcome screen to sign in to your Okta account.
Once that’s done, you’ll see a confirmation that your device has been registered successfully. From that point on, you’ll use Okta to sign in to your 1Password account on that device.
Once you’ve registered your first trusted device, you can use it to authenticate additional devices. When you add an account from Settings, you’ll see a notification that the account you’re signing in to now requires you to sign in with Okta.
As you follow the onscreen instructions, a notification will appear on your first trusted device (if you allowed notifications during the initial setup), alerting you to the fact that a new device is trying to use your 1Password account.
You’ll also see a new, one-time code appear on your trusted device. Enter that one-time code on the unregistered device to confirm it as a trusted device. From then on, you’ll sign in to 1Password with Okta on that device.
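The trusted-device flow above (first device signs in with Okta, later devices are confirmed with a one-time code shown on an already-trusted device) can be modeled roughly as follows. This is a conceptual sketch only; 1Password's actual protocol, code format, and key handling are internal and certainly more involved.

```python
import secrets

class TrustedDeviceRegistry:
    """Toy model of the device-confirmation flow described in the article."""

    def __init__(self):
        self.trusted = set()
        self.pending = {}  # one-time code -> device awaiting confirmation

    def register_first_device(self, device_id):
        # The first device becomes trusted after a successful Okta sign-in.
        self.trusted.add(device_id)

    def request_trust(self, new_device_id):
        # An existing trusted device displays a one-time code for the new device.
        code = f"{secrets.randbelow(10**6):06d}"
        self.pending[code] = new_device_id
        return code

    def confirm(self, code):
        # Entering the code on the new device promotes it to trusted;
        # each code is single-use.
        device = self.pending.pop(code, None)
        if device is not None:
            self.trusted.add(device)
        return device is not None
```

The point of the design is that possession of an already-trusted device, not just IdP credentials, is required to enroll a new one.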
Unlock with Okta is the best of both worlds. Workers have a simple way to access everything they’ve stored in 1Password, using a single set of credentials they already know. Your company gets streamlined security policies, simplified administration and onboarding, and full control over – and visibility into – how employees use their 1Password accounts.
Not using Okta? Stay tuned. Unlock with Azure is now in private preview, and you can get a sneak peek in the attached setup video. We’ll be rolling out support for additional SSO providers like Duo in the near future.
For a deeper dive into Unlock with Okta, join CPO Steve Won, Product Manager Yash Kaur, and Airwallex Senior IT Engineer David Baverstock for a complete walkthrough on March 29 at 9AM PT / 12PM ET.
And if you’re considering switching your business to 1Password, a quick reminder: when you make the move, we’ll help cover the cost.
Save your seat for a walkthrough of Unlock with Okta with 1Password CPO Steve Won, Senior Product Manager Yashpreet Kaur, and special guest, 1Password customer and Airwallex IT Engineer David Baverstock on March 29, 2023.
Register now
Generative artificial intelligence is hitting the world of search and browsing in a big way. At DuckDuckGo, we’ve been trying to understand the difference between what it could do well in the future and what it can do well right now. But no matter how we decide to use this new technology, we want it to add clear value to our private search and browsing experience.
Today, we’re giving all users of DuckDuckGo’s browsing apps and browser extensions the first public look at DuckAssist, a new beta Instant Answer in our search results. If you enter a question that can be answered by Wikipedia into our search box, DuckAssist may appear and use AI natural language technology to anonymously generate a brief, sourced summary of what it finds in Wikipedia — right above our regular private search results. It’s completely free and private itself, with no sign-up required, and it’s available right now.
This is the first in a series of generative AI-assisted features we hope to roll out in the coming months. We wanted DuckAssist to be the first because we think it can immediately help users find answers to what they are looking for faster. And, if this DuckAssist trial goes well, we will roll it out to all DuckDuckGo search users in the coming weeks.
DuckAssist is available to try right now through our browsing apps and browser extensions
DuckAssist is a new type of Instant Answer in our search results, just like News, Maps, Weather, and many others we already have. We designed DuckAssist to be fully integrated into DuckDuckGo Private Search, mirroring the look and feel of our traditional search results, so while the AI-generated content is new, we hope using DuckAssist feels second nature.
DuckAssist answers questions by scanning a specific set of sources — for now that’s usually Wikipedia, and occasionally related sites like Britannica — using DuckDuckGo’s active indexing. Because we’re using natural language technology from OpenAI and Anthropic to summarize what we find in Wikipedia, these answers should be more directly responsive to your actual question than traditional search results or other Instant Answers.
For this initial trial, DuckAssist is most likely to appear in our search results when users search for questions that have straightforward answers in Wikipedia. Think questions like “what is a search engine index?” rather than more subjective questions like “what is the best search engine?”. We are using the most recent full Wikipedia download available, which is at most a few weeks old. This means DuckAssist will not appear for questions more recent than that, at least for the time being. For those questions, our existing search results page does a better job of surfacing helpful information.
As a result, you shouldn’t expect to see DuckAssist on many of your searches yet. But the combination of generative AI and Wikipedia in DuckAssist means we can vastly increase the number of Instant Answers we can provide, and when it does pop up, it will likely help you find the information you want faster than ever.
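The pipeline described above, which retrieves a relevant Wikipedia passage and then has a language model summarize only that passage, can be sketched like this. The `summarize()` stub stands in for the OpenAI/Anthropic call; DuckDuckGo's real prompt, index, and retrieval code are not public, so everything here is illustrative.

```python
# Tiny stand-in for DuckDuckGo's Wikipedia index (illustrative content).
WIKI_INDEX = {
    "search engine index": (
        "A search engine index collects, parses, and stores data "
        "to support fast and accurate information retrieval."
    ),
}

def summarize(passage, question):
    # Placeholder for the natural-language model call; here we just
    # return the passage's first sentence.
    return passage.split(".")[0] + "."

def duckassist(question):
    for topic, passage in WIKI_INDEX.items():
        if topic in question.lower():
            # The answer is generated only from the retrieved source text,
            # and is shown with a link back to the source article.
            return {"answer": summarize(passage, question),
                    "source": f"wikipedia:{topic}"}
    return None  # no Instant Answer; fall back to regular search results
```

Returning `None` when no source matches mirrors the behavior described above: DuckAssist simply doesn't appear, and regular search results are shown instead.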
DuckAssist joins many other Instant Answers on DuckDuckGo’s private search results
Generative AI technology is designed to generate text in response to any prompt, regardless of whether it “knows” the answer or not. However, by asking DuckAssist to only summarize information from Wikipedia and related sources, the probability that it will “hallucinate” — that is, just make something up — is greatly diminished. In all cases though, a source, usually a Wikipedia article, will be linked below the summary, often pointing you to a specific section within that article so you can learn more.
Nonetheless, DuckAssist won’t generate accurate answers all of the time. We fully expect it to make mistakes. Because there’s a limit to the amount of information the feature can summarize, we use the specific sentences in Wikipedia we think are the most relevant; inaccuracies can happen if our relevancy function is off, unintentionally omitting key sentences, or if there’s an underlying error in the source material given. DuckAssist may also make mistakes when answering especially complex questions, simply because it would be difficult for any tool to summarize answers in those instances. That’s why it’s so important for our users to share feedback during this beta phase: there’s an anonymous feedback link next to all DuckAssist answers where you can let us know about any problems, so we can identify where things aren’t working well and take quick steps to make improvements.
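A toy version of the "relevancy function" idea mentioned above: score each source sentence by word overlap with the question and keep only the top few for the model to summarize. DuckDuckGo's actual ranking is proprietary and undoubtedly more sophisticated; this just illustrates why a weak relevancy function can omit key sentences.

```python
def most_relevant(sentences, question, k=2):
    """Return the k sentences sharing the most words with the question."""
    def words(text):
        return {w.strip(".,?!") for w in text.lower().split()}

    q = words(question)
    scored = [(len(q & words(s)), i, s) for i, s in enumerate(sentences)]
    # Highest overlap first; ties keep the original sentence order.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [s for _, _, s in scored[:k]]
```

If the question's wording doesn't overlap with the sentence that actually contains the answer, that sentence never reaches the summarizer — exactly the failure mode the paragraph above describes.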
DuckAssist is anonymous, with no logging in required. It’s a fully integrated part of DuckDuckGo Private Search, which is also free and anonymous. We don’t save or share your search or browsing history when you search on DuckDuckGo or use our browsing apps or browser extensions, and searches with DuckAssist are no exception. We also keep your search and browsing history anonymous to our search content partners — in this case, OpenAI and Anthropic, used for summarizing the Wikipedia sentences we identify. As with all other third parties we work with, we do not share any personally identifiable information like your IP address. Additionally, our anonymous queries will not be used to train their AI models. And anything you share via the anonymous feedback link goes to us and us alone.
If DuckAssist has already answered a question on the same topic, its response will appear automatically
We’ve used Wikipedia for many years as the primary source for our “knowledge graph” Instant Answers, and, while we know it isn’t perfect, Wikipedia is relatively reliable across a wide variety of subjects. Because it’s a public resource with a transparent editorial process that cites all the sources used in an article, you can easily trace exactly where its information is coming from. Finally, since Wikipedia is always being updated, DuckAssist answers can reflect recent understanding of a given topic: right now our DuckAssist Wikipedia index is at most a few weeks old, and we have plans to make it even more recent. We also have plans to add more sources soon; you may already see some signs of that in your results!
• Phrasing your search query as a question makes DuckAssist more likely to appear in search results.
• If you’re fairly confident that Wikipedia has the answer to your query, adding the word “wiki” to your search also makes DuckAssist more likely to appear in search results.
• For now, the DuckAssist beta is only available in English in our browsing apps (iOS, Android, and Mac) and browser extensions (Firefox, Chrome, and Safari). If the trial goes well, we plan to roll it out to all DuckDuckGo search users soon.
• If you don’t want DuckAssist to appear in search results, you can disable “Instant Answers” in search settings. (Note: this will disable all Instant Answers, not just DuckAssist.)
• If DuckAssist has generated an answer for a given topic before, the answer will appear automatically. Otherwise, you can click the ‘Ask’ button to have an answer generated for you in real time.
2022 marks DuckDuckGo's twelfth year of donations—our annual program to support organizations that share our vision of raising the standard of trust online. This year, we're proud to donate to a diverse selection of organizations across the globe that strive for better privacy, digital rights, greater competition in online markets, and access to information free from algorithmic bias.
This year, we've been able to increase our donation amount to $1,100,000, bringing the total over the past decade to $4,750,000. Everyone using the Internet deserves simple and accessible online protection; these organizations are all pushing to make that a reality. We encourage you to check out their valuable work below, alongside details about how our funds were allocated this year.
$125,000 to the Electronic Frontier Foundation (EFF)
"EFF is an essential champion of user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development--and has been since our founding in 1990."
$125,000 to Fight for the Future
"Fight for the Future harnesses the power of the Internet to channel outrage into action, defending our most basic rights in the digital age. They fight to ensure that technology is a force for empowerment, free expression, and liberation rather than tyranny, corruption, and structural inequality."
$125,000 to The Markup
"The Markup is a nonprofit newsroom that investigates how powerful institutions are using technology to change our society."
$125,000 to Public Knowledge
"Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest."
$125,000 to Signal
"Signal Technology Foundation develops open source privacy technology that protects free expression and enables secure global communication."
$25,000 to Access Now
"Access Now defends and extends the digital rights of people and communities at risk by combining direct technical support, strategic advocacy, grassroots grantmaking, and convenings such as RightsCon."
$25,000 to Algorithmic Justice League
"AJL's current mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms."
$25,000 to Article19
"Established in 1987, ARTICLE 19 is an international think-do organization that defends freedom of expression, fights against censorship, protects dissenting voices, and advocates against laws and practices that silence individuals, both online and offline."
$25,000 to the Australia Institute's Centre for Responsible Technology
"The Australia Institute’s Centre for Responsible Technology develops public policy and research that advocate for a fairer and healthier online experience and gives back agency to individuals in our networked world."
$25,000 to Bits of Freedom
"Bits of Freedom shapes internet policy in the Netherlands and Brussels through advocacy, campaigning and litigation, because we believe in an open and just society, in which people can hold power accountable and effectively question the status quo."
$25,000 to the British Institute for International and Comparative Law
"The Competition Law Forum is a centre of excellence for European competition and antitrust policy and law at the British Institute of International and Comparative Law (BIICL)."
$25,000 to the Center for Critical Internet Inquiry
“C2i2 is a critical internet studies research center and community, committed to social justice, policy and human rights.”
$25,000 to the Detroit Community Technology Project (DCTP)
"Detroit Community Technology Project builds healthy digital ecosystems by training Digital Stewards and supporting the development of community governed internet networks."
$25,000 to European Digital Rights (EDRi)
"The EDRi network is a dynamic and resilient collective of NGOs, experts, advocates and academics working to defend and advance digital rights across the continent - for almost two decades, it has served as the backbone of the digital rights movement in Europe."
$25,000 to Freiheitsrechte (GFF)
"The GFF (Gesellschaft für Freiheitsrechte / Society for Civil Rights) is a Berlin-based non-profit NGO founded in 2015. Its mission is to establish a sustainable structure for successful strategic litigation in the area of human and civil rights in Germany and Europe."
$25,000 to the Internet Economy Foundation (IE.F)
"The IE.F is an independent think-tank based in Berlin that is dedicated to ensuring fair competition in the Internet economy and fostering a vibrant European digital ecosystem."
$25,000 to OpenMedia
"OpenMedia works to keep the Internet open, affordable, and surveillance-free. We create community-driven campaigns to engage, educate, and empower people to safeguard the Internet."
$25,000 to the Open Rights Group
"Open Rights Group (ORG) is a UK-based digital campaigning organisation working to protect our rights to privacy and free speech online."
$25,000 to the Open Source Technology Improvement Fund (OSTIF)
"OSTIF, or The Open Source Technology Improvement Fund, is a corporate non-profit dedicated to improving the security of critical open-source projects. This is done mainly by facilitating and managing security reviews and associated work for projects and organizations. In the last year, OSTIF was responsible for the identifying and fixing of more than 50 critical and high severity vulnerabilities and 250 more bug fixes in widely adopted projects."
$25,000 to Privacy Rights Clearinghouse
"Privacy Rights Clearinghouse works to make data privacy more accessible to all by empowering people and advocating for positive change."
$25,000 to Restore the Fourth
"Restore the Fourth is a grassroots, volunteer-run, nonpartisan civil liberties group that opposes mass government surveillance, protects privacy, and promotes the Fourth Amendment."
$25,000 to the Surveillance Technology Oversight Project (STOP)
"The Surveillance Technology Oversight Project (S.T.O.P.) advocates and litigates for privacy, working to abolish local governments’ systems of discriminatory mass surveillance."
$25,000 to the Technology Oversight Project
"Through engaging with lawmakers, exposing false narratives and bad actors, and pushing for landmark legislation, The Tech Oversight Project seeks to hold tech giants accountable for their anti-competitive, corrupting, and corrosive influence on our society and the levers of power."
$25,000 to the Tor Project
"At the Tor Project, we believe everyone should be able to explore the internet with privacy. We advance human rights and defend your privacy online through free, open source software and the decentralized Tor network."
App Tracking Protection for Android is launching into open beta today. It's a free feature in the DuckDuckGo Android app that helps block 3rd-party trackers in the apps on your phone (like Google snooping in your weather app) – meaning more comprehensive privacy and less creepy targeting.
With the App Tracking Protection 'Activity Report', you can see which 3rd-parties are trying to track you.
You may have heard of Apple’s App Tracking Transparency (ATT), a feature for iPhones and iPads that asks users whether they want to allow third-party app tracking or not in each of their apps (with the majority of people choosing “not”). But most smartphone users worldwide actually use Android. So, we’re offering Android users something even more powerful: enable our App Tracking Protection and we'll automatically block all the hidden trackers we can identify as blockable across your apps.
App Tracking Protection beta users have been surprised to see how many tracking attempts the feature is blocking.
The Trouble with App Trackers
The average Android user has 35 apps on their phone. Through our testing, we’ve found that a phone with 35 apps can experience between 1,000-2,000 tracking attempts every day and contact 70+ different tracking companies.
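A quick back-of-the-envelope check of those figures: with 35 apps generating 1,000–2,000 tracking attempts per day, each app averages roughly 30–60 attempts daily.

```python
# Per-app tracking attempts implied by the numbers quoted above.
apps = 35
daily_low, daily_high = 1000, 2000

per_app_low = daily_low / apps    # roughly 29 attempts per app per day
per_app_high = daily_high / apps  # roughly 57 attempts per app per day
```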
Imagine you’re spending a lazy Sunday afternoon playing around with apps on your phone; keeping an eye on flight prices for a getaway (Southwest Airlines app), checking out a house your friend has been raving about (Zillow app), seeing if those concert tickets have gone on sale yet (SeatGeek app), and checking the weather (Weather Network app).
Within these four apps alone, 45+ tracking companies are known to collect personal data like your precise location, email address, phone number, time zone, and a fingerprint of your device (like screen resolution, device make and model, language, local internet provider, etc.) that can be used to identify you. With App Tracking Protection, you can now see exactly what the trackers typically try to collect, and we help block that collection from happening.
In the Android app, when you use App Tracking Protection, you can see the personal data we're blocking 3rd-party trackers from getting.
But what are they doing with all that information? Personal data companies like Facebook and Google use that information to build a profile that advertisers and content-targeting companies use to influence what you see online.
You could get ads about your mom’s toothpaste brand after spending time at her house (no, not a coincidence – check out this thread), be bombarded with pregnancy-related ads and content after pregnancy loss or see drug-related ads or articles about diseases you learned about on WebMD. The examples are endless. It can feel like you're being listened to, but in reality it’s not that someone is listening to your conversations, it's that your activity is being relentlessly tracked and analyzed!
The problems with all this information collection go way beyond so-called “relevant” (aka creepy) advertising and targeting. Tracking networks can sell your data to other companies like data brokers, advertisers, and governments, resulting in more substantial harms like ideological manipulation, discrimination, personal price manipulation, polarization, and more.
DuckDuckGo for Android, our all-in-one privacy solution, can help. Our app was already protecting you across search, browsing, and email. Now, with App Tracking Protection, you’re getting a lot of protection from 3rd-party app trackers, too.
How App Tracking Protection for Android Works
When App Tracking Protection is enabled, it will detect when other apps on your phone are about to send data to any of the 3rd-party tracking companies in our app tracker dataset, and block most of those requests. And that’s it! You can continue to use your apps as usual, and App Tracking Protection works in the background to block trackers whenever it finds them, even while you sleep.
The DuckDuckGo app on Android also offers a real-time view of App Tracking Protection’s results, including which tracking network is associated with each app and what data they're known to collect. If you have notifications on, you’ll also get automatic summaries if you want them.
To keep you up-to-date, we send automatic summaries about the app tracker blocking happening behind the scenes.
App Tracking Protection uses a local “VPN connection,” which means it does its work right on your smartphone, without sending app data to DuckDuckGo or other remote servers. That is, App Tracking Protection does not route your app data through external companies (including ours).
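Conceptually, the on-device blocking works like a local interceptor checking each outbound hostname against a tracker dataset, including subdomains of known tracker domains. This sketch uses made-up domain names; DuckDuckGo's real tracker dataset and VPN-interception plumbing are far more extensive.

```python
# Illustrative tracker dataset (real entries are maintained by DuckDuckGo).
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_blocked(hostname):
    """Return True if the hostname or any parent domain is a known tracker."""
    parts = hostname.lower().split(".")
    # Check the hostname itself and every parent domain,
    # so cdn.tracker.example matches the tracker.example entry.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in TRACKER_DOMAINS:
            return True
    return False
```

Because the check runs locally, a decision to block (or allow) a request never requires the app's traffic to leave the device.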
We Still Want to Hear from You!
As we work through the beta phase, there are a small number of apps being excluded because they rely on tracking to work properly, like browsers and apps with in-app browsers. Throughout the waitlist period, we've reduced this number by half and also dropped the exclusion for games. We look forward to reducing this list even more.
To send us general feedback or report issues with the DuckDuckGo app: open Settings > Share Feedback (in the Other section). If you run into issues with another app on your smartphone as a result of App Tracking Protection, you can disable protection for just that app under "Having Problems With An App". You'll then be asked to give details of the problem you experienced. Your feedback greatly helps our team continue improving App Tracking Protection and we appreciate it!
Get Started:
To get access to the beta of App Tracking Protection, find it in your settings.
Signing up is easy! Here are the four simple steps to automatic app tracker blocking.
Forget going “incognito” with other browsers that don’t actually deliver substantive web tracking protection; you deserve privacy all the time, with built-in protections that make the Internet less creepy and less cluttered. Equipped with new and improved features for everyday use, DuckDuckGo for Mac is here to clean up the web as you browse. (And yes, you can import all your passwords and bookmarks from other browsers and password managers – so switching is quick and easy!)
The privacy protections built into DuckDuckGo for Mac add up to a better user experience; by blocking trackers before they load, for example, DuckDuckGo for Mac uses about 60% less data than Chrome. The desktop app includes the built-in privacy protections you know and trust from our mobile apps – which now see over 50M downloads a year – including multiple layers of defense against third-party trackers, secure link upgrading with Smarter Encryption, and our Fire Button to instantly clear recent browsing data. An all-in-one app that aims to be the “easy button” for privacy, DuckDuckGo for Mac has no fiddly privacy settings to adjust – our foundational protections are on by default, so you can get back to browsing.
Since announcing the waitlist beta in April, we’ve been listening to beta testers’ feedback and making even more improvements to meet your needs. We added a bookmarks bar, pinned tabs, and a way to view your locally stored browsing history. Our Cookie Consent Pop-Up Manager can now handle cookie pop-ups on significantly more sites, automatically choosing the most private option and sparing you from annoying interruptions.
Keep pop-ups at bay with our automatic cookie consent manager
The app also lets you activate DuckDuckGo Email Protection on desktop, protecting your inbox with email tracker blocking and private @duck.com addresses. While we work on browser extension support that meets our high standards of privacy and quality, we’re building in more features that meet the same needs as the most popular extensions: ad-blocking and secure password management. These new features will become available across our other platforms in the near future.
Cleaning up YouTube with Duck Player – fewer creepy ads, fewer distractions: Want a more-private way to watch YouTube videos in peace? Duck Player protects you from targeted ads and cookies with a distraction-free interface that incorporates YouTube’s strictest privacy settings for embedded video. Any ads you see within Duck Player will not be personalized; in our testing, this prevented ads on most videos altogether. YouTube still registers your views, so it’s not totally anonymous, but none of the videos you watch in Duck Player contribute to your YouTube advertising profile or suggest distracting personalized recommendations. The feature can be always-on, ready to go whenever you click a YouTube link, or you can opt in on specific videos – perfect for when you’re sharing your screen, using a shared device, or just trying to stay focused. It’s equally easy to get back to the default version of YouTube whenever you want.
Open YouTube links in Duck Player for more-private viewing
Eliminating invasive ads as you browse: DuckDuckGo for Mac has always blocked invasive trackers before they load, effectively eliminating the ads that rely on that creepy tracking. (Because so many ads work that way, you’ll see way fewer ads.) Today, we’ve made another big improvement: we’re cleaning up the whitespace left behind by those ads for an efficient, distraction-free look without the need for a separate ad blocker.
More choices for secure password management: Our browser includes our own secure and easy-to-use password manager that can automatically remember and fill in login credentials and suggest random passwords for new logins. (It can also securely save addresses and payment methods.) Our autofill experience is continually improving and will roll out on our mobile apps soon.
This works for most users, especially since you can import passwords. But we understand some folks want to continue using third-party password management across browsers and devices. So, we’ve teamed up with Bitwarden, the accessible open-source password manager, in the first of what we hope to be several similar integrations. In the coming weeks, Bitwarden users will be able to activate this seamless two-way integration in their browser settings. DuckDuckGo for Mac is also compatible with 1Password’s new universal autofill feature.
Easily autofill your Bitwarden passwords in DuckDuckGo for Mac
“The DuckDuckGo browser has been a breath of fresh air, a lightweight and snappy browser that isn't a gamified gimmick and doesn’t sell my browsing history to advertisers. Its clean and familiar UI allowed me to switch with no hassle. I would definitely recommend more people switching as soon as they can.”
“The automatic cookie settings feature is awesome!!!”
“I love the UI of this app! Very clean and minimalist. Also, it really is blazing fast. I appreciate the careful consideration into design and performance with the use of the internal rendering engine. Thank you for all your work!”
“DuckDuckGo is replacing Google Chrome on my Mac and I love it.”
“I’ve been using [DuckDuckGo for Mac] for several months and I have to say, I love the simplicity and privacy. We’ve tossed a lot of stuff into browsers over the years to get privacy and speed. This achieves both with much less.”
We built DuckDuckGo for Mac with privacy, security, and simplicity in mind. Our default privacy settings are stronger than what most other browsers offer, and you don’t need to sift through obscure menus to turn them on. DuckDuckGo for Mac is not a “fork” of Chromium, or any other browser code. All the app code – tab and bookmark management, our new tab page, our password manager, etc. – is written by our own engineers. For rendering, it uses a public macOS API, making it super compatible with Mac devices. DuckDuckGo believes in open sourcing our apps and extensions whenever possible, and we plan to do so for DuckDuckGo for Mac before it moves out of beta.
We’re proud of how far DuckDuckGo for Mac has come in this short time, and it will only get better from here! Users will soon be able to sync DuckDuckGo bookmarks and passwords across devices. We’ll also be adding more built-in features that offer native alternatives to more popular extensions. Please keep the feedback coming; we're listening! (You can find the feedback form in the app's three-dot menu, right under the Fire Button.)
Before you ask, yes, our Windows browser is still on the way! DuckDuckGo for Windows is in an early friends and family beta, with a private waitlist beta expected in the coming months. (Right now, Mac and Windows are the only desktop platforms we’re focusing on.) Stay tuned for updates. And if you’re interested in working on our desktop apps, we’re hiring remotely, worldwide.
On Tuesday September 13th, 13 privacy-focused technology companies representing more than 100 million users in the United States published a letter to U.S. Congressional Leadership imploring them to support the American Innovation and Choice Online Act (AICOA) and bring it to a floor vote as soon as possible.
Incessant data collection and tech monopolies are inherently linked: the more data these companies collect and use to influence user decision making, the stronger their grip on the industry becomes, leaving users feeling they have no option but to accept a lack of privacy to use the Internet. However, users do have choices when it comes to the services they use, and they do not have to accept services that have made it their business to abuse user privacy. If the American Innovation and Choice Online Act (AICOA) becomes law, millions of Americans will have better access to Internet services with more privacy and less data-driven targeting and manipulation.
U.S. Senator Chuck Schumer, Senate Majority Leader
U.S. Senator Mitch McConnell, Senate Minority Leader
U.S. Senator Dick Durbin, Senate Majority Whip
U.S. Senator John Thune, Senate Minority Whip
U.S. Representative Nancy Pelosi, Speaker of the House
U.S. Representative Kevin McCarthy, House Minority Leader
U.S. Representative Steny Hoyer, House Majority Leader
U.S. Representative Steve Scalise, House Minority Whip
RE: Support for S. 2992/H.R. 3816, The American Innovation and Choice Online Act.
Dear U.S. Congressional Leadership:
We, the undersigned privacy companies and organizations, urge Congress to schedule floor votes for the American Innovation and Choice Online Act (AICOA) as soon as possible. This bill has been delayed for far too long and the American public deserves the kind of innovative online ecosystem it would create.
Our companies and organizations offer privacy protective alternatives to the services provided by dominant technology companies. While more and more Americans are embracing privacy-first technologies, some dominant firms still use their gatekeeper power to limit competition and restrict user choice. We implore you to pass AICOA as it would remove barriers for consumers to freely select privacy protective services.
Massive tech platforms can exert influence over society and the digital economy because they ultimately have the power to collect, analyze, and monetize exorbitant amounts of personal information. This is not by accident, as some of the tech giants have intentionally abused their gatekeeper positions to lock users into perpetual surveillance while simultaneously making it difficult to switch to privacy-protective alternatives. These monopolist firms: use manipulative design tactics to steer individuals away from rival services; restrict the ability of competitors to interoperate on the platform; use non-public data to benefit their services or products; and make it impossible or complicated for users to change their default settings or uninstall apps. Such tactics deprive consumers of the innovative offerings an open and vibrant market would yield.
Passage of AICOA is critical to protecting the privacy of American consumers. These self-preferencing tactics keep consumers stuck in an ecosystem of constant tracking by making it needlessly difficult for users to choose alternative privacy-respecting products and services. This is not how a truly free market operates, which is why commonsense reforms are necessary to combat the most egregious anticompetitive tactics and spur innovation that will increase the options available to American consumers. That’s why we support the AICOA and ask that it be scheduled for a vote. The AICOA will improve the internet in many ways and, most importantly, remove barriers that have been erected to block Americans from enjoying more privacy online.
Sincerely,
Andi
Brave
Disconnect
DuckDuckGo
Efani Secure Mobile
Fathom Analytics
Malloc
Mozilla
Neeva
Proton
Skiff
Thexyz Inc.
Tutanota
You.com
[Post updated December 19th, 2022 to reflect the addition of Skiff.]
Why Block Email Trackers or Hide Your Email Address?
Have you ever entered your email for a loyalty program or coupon and started getting emails from companies you didn’t subscribe to? Or noticed ads following you around after clicking on an email link? You’re not alone! There are multiple ways companies can use your email to track you, target you with ads, and influence what you see online. They can even share your personal information with third parties – all without your knowledge.
Companies embed trackers in images and links within email messages, letting them collect information like when you’ve opened a message, where you were when you opened it, and what device you were using. In our closed Email Protection beta, we found that approximately 85% of beta testers’ emails contained hidden email trackers! Very sneaky. Companies can use this information to build a profile about you.
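The "hidden tracker in an image" technique usually relies on a tiny remote image (a tracking pixel) that fires a request to the sender's server when the message is opened. A minimal sketch of how such beacons can be spotted in an email's HTML is below; the `tracker.example` URL and the size-based heuristic are illustrative assumptions, not DuckDuckGo's actual detection logic, which uses a curated tracker dataset.

```python
from html.parser import HTMLParser

class PixelTrackerFinder(HTMLParser):
    """Collects <img> tags that look like 1x1 tracking pixels."""
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src = a.get("src", "")
        # Remote images whose declared size is 1x1 (or 0x0) are the
        # classic "open tracking" beacon pattern.
        if src.startswith("http") and a.get("width") in ("0", "1") \
                and a.get("height") in ("0", "1"):
            self.suspects.append(src)

email_html = '<p>Hi!</p><img src="https://tracker.example/open?id=123" width="1" height="1">'
finder = PixelTrackerFinder()
finder.feed(email_html)
print(finder.suspects)  # the beacon URL is flagged
```

Real-world detection is more involved (trackers can use full-size images or CSS backgrounds), which is why list-based matching against known tracker domains matters.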
And because your email addresses are connected to so much of what you do online – making purchases, using social media, and more – tracking companies can also effectively use your personal email address as a profiling identifier. In fact, many companies are so hungry for your personal email address that they’ll actually pull it from online forms you haven’t even submitted yet! Beyond sending you more emails, companies often upload your email address to Facebook and Google to target you with creepy ads across apps and websites.
Reintroducing DuckDuckGo Email Protection (Beta)
DuckDuckGo Email Protection is a free email forwarding service that removes multiple types of hidden email trackers and lets you create unlimited unique private email addresses on the fly. You can use Email Protection with your current email provider and app – no need to update your contacts or juggle multiple accounts. Email Protection works seamlessly in the background to deliver your more-private emails right to your inbox.
Signing up for Email Protection gives you the ability to create Duck Addresses. Two types help protect your email privacy: a single Personal Duck Address for everyday use, and unique Private Duck Addresses you can generate on the fly.
Many users have loved the Email Protection beta so far, with millions of more-private emails being forwarded weekly. It’s email privacy, simplified – and we’re thrilled to open the beta for everyone to try it out!
Updates to DuckDuckGo Email Protection
Since launching DuckDuckGo Email Protection into private waitlist beta, we’ve been continuously making improvements based on feedback.
Link Tracking Protection: In addition to blocking trackers in images, scripts, and other media directly embedded in emails, we can now detect and remove a growing number of the trackers embedded in email links.
Smarter Encryption: We’ve started using the same Smarter Encryption (HTTPS Upgrading) that’s at work in our search engine and apps to upgrade insecure (unencrypted, HTTP) links in emails to secure (encrypted, HTTPS) links when they’re on our upgradable list.
Replying from your Duck Addresses: You can now reply to emails from all your Duck Addresses. When you get an email to a Duck Address, you can just hit ‘Reply,’ type your message, and send it off. Your email will then be delivered from your Duck Address instead of your personal address.
Self-Service Dashboard: Want to update your forwarding address? Or even delete your account? You can now make changes to your Duck account whenever you want, saving you time and effort.
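The first two updates above, removing trackers embedded in email links and upgrading insecure links to HTTPS, can be sketched roughly as follows. The parameter names and the "upgradable" host are hypothetical placeholders; the real protections use maintained datasets rather than hard-coded lists.

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

# Hypothetical examples only; real tracking-parameter and HTTPS-upgradable
# lists are large, curated datasets.
TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid", "gclid"}
HTTPS_UPGRADABLE = {"example.com"}

def clean_link(url: str) -> str:
    """Strip known tracking query parameters and upgrade HTTP to HTTPS
    when the host is on the upgradable list."""
    scheme, netloc, path, query, frag = urlsplit(url)
    kept = [(k, v) for k, v in parse_qsl(query) if k not in TRACKING_PARAMS]
    if scheme == "http" and netloc in HTTPS_UPGRADABLE:
        scheme = "https"
    return urlunsplit((scheme, netloc, path, urlencode(kept), frag))

print(clean_link("http://example.com/sale?id=7&utm_source=newsletter&fbclid=abc"))
# -> https://example.com/sale?id=7
```

Note that the non-tracking `id=7` parameter survives: the goal is to neutralize tracking while keeping the link functional.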
How People are Using Email Protection
Wondering how this feature works in the real world? Here’s what our beta testers had to say:
Getting Started
Email Protection is supported in the DuckDuckGo Privacy Browser for iOS and Android, DuckDuckGo for Mac (beta), and DuckDuckGo Privacy Essentials browser extensions for Firefox, Chrome, Edge, and Brave.
Once you follow the steps to create your Personal Duck Address, you’re all set to start using it right away! And while browsing, look for Dax the Duck (our mascot) to help you autofill your Personal Duck Address or generate a Private Duck Address for you on the fly.
Like all our features, DuckDuckGo Email Protection will never track you. We believe that your emails are none of our business! When your Duck Addresses receive an email, we immediately apply our tracking protections and then forward it to you, never saving it on our systems. Sender information, subject lines...we don’t track any of it. (Learn more in our Privacy Guarantees)
Additionally, we are committed to Email Protection for the long term, so you can feel confident about using your Duck Addresses. During the private beta, we’ve been shoring up our backend systems to support millions of users. And as we move out of beta, we'll also be incorporating our email tracker dataset into our open source Tracker Radar.
So give Email Protection a try and let us know what you think! We look forward to helping you protect your inbox.
Our vision at DuckDuckGo is to raise the standard of trust online. Raising that standard means maximizing the privacy we offer by default, being transparent about how our privacy protections work, and doing our best to make the Internet less creepy. Recently, I’ve heard from a number of users and understand that we didn’t meet their expectations around one of our browser’s web tracking protections. So today we are announcing more privacy and transparency around DuckDuckGo’s web tracking protections.
More Privacy: Expanding 3rd-Party Tracker Loading Protection to Include Microsoft
Over the next week, we will expand the third-party tracking scripts we block from loading on websites to include scripts from Microsoft in our browsing apps (iOS and Android) and our browser extensions (Chrome, Firefox, Safari, Edge and Opera), with beta apps to follow in the coming month. This expands our 3rd-Party Tracker Loading Protection, which blocks identified tracking scripts from Facebook, Google, and other companies from loading on third-party websites, to now include third-party Microsoft tracking scripts. This web tracking protection is not offered by most other popular browsers by default and sits on top of many other DuckDuckGo protections. We explain how this works differently with DuckDuckGo advertising below.
Websites often embed scripts from other companies (commonly called “third-party scripts”) that automatically load when you visit their site. For example, the most prevalent third-party script is Google Analytics, which helps websites understand how their sites are being used. But typically Google can also use this information to profile you outside of the site where the information originated. Most browsers’ default tracking protection focuses on cookie and fingerprinting protections that only restrict third-party tracking scripts after they load in your browser. Unfortunately, that level of protection leaves information like your IP address and other identifiers sent with loading requests vulnerable to profiling. Our 3rd-Party Tracker Loading Protection helps address this vulnerability, by stopping most 3rd-party trackers from loading in the first place, providing significantly more protection.
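The core of "blocking before loading" is checking every outgoing request against a tracker blocklist before the browser fetches it, so the tracker's server never sees your IP address at all. A simplified sketch of that matching logic is below; the three domains are illustrative stand-ins for a real blocklist containing thousands of entries.

```python
# Hypothetical blocklist; real lists (like DuckDuckGo's public tracker
# protection list) contain thousands of entries.
BLOCKED_DOMAINS = {"google-analytics.com", "bat.bing.com", "connect.facebook.net"}

def should_block(request_host: str, page_host: str) -> bool:
    """Block a request before it loads if it goes to a known tracker domain
    (or a subdomain of one) that is third-party to the page being visited."""
    if request_host == page_host or request_host.endswith("." + page_host):
        return False  # first-party requests are left alone
    return any(request_host == d or request_host.endswith("." + d)
               for d in BLOCKED_DOMAINS)

print(should_block("www.google-analytics.com", "news.example"))  # True: never loads
print(should_block("cdn.news.example", "news.example"))          # False: first-party
```

Because the request is cancelled before it leaves the browser, this protects more than post-load cookie or fingerprinting restrictions, which only limit what a script can do after it has already contacted the tracker's server.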
Previously, we were limited in how we could apply our 3rd-Party Tracker Loading Protection on Microsoft tracking scripts due to a policy requirement related to our use of Bing as a source for our private search results. We’re glad this is no longer the case. We have not had, and do not have, any similar limitation with any other company.
Microsoft scripts were never embedded in our search engine or apps, which do not track you. Websites insert these scripts for their own purposes, and so they never sent any information to DuckDuckGo. Since we were already restricting Microsoft tracking through our other web tracking protections, like blocking Microsoft’s third-party cookies in our browsers, this update means we’re now doing much more to block trackers than most other browsers.
DuckDuckGo Advertising: Working Toward Private Ad Conversions
Advertising on DuckDuckGo is done in partnership with Microsoft. Viewing ads on DuckDuckGo is anonymous, and Microsoft has committed to not profile our users on ad clicks: “when you click on a Microsoft-provided ad that appears on DuckDuckGo, Microsoft Advertising does not associate your ad-click behavior with a user profile. It also does not store or share that information other than for accounting purposes.”
To evaluate whether an ad on DuckDuckGo is effective, advertisers want to know if their ad clicks turn into purchases (conversions). To see this within Microsoft Advertising, they use Microsoft scripts from the bat.bing.com domain. Currently, if an advertiser wants to detect conversions for their own ads that are shown on DuckDuckGo, 3rd-Party Tracker Loading Protection will not block bat.bing.com requests from loading on the advertiser’s website following DuckDuckGo ad clicks, but these requests are blocked in all other contexts. For anyone who wants to avoid this, it's possible to disable ads in DuckDuckGo search settings.
To eventually replace the reliance on bat.bing.com for evaluating ad effectiveness, we’ve started working on an architecture for private ad conversions that can be externally validated as non-profiling. DuckDuckGo isn’t alone in trying to solve this issue; Safari is working on Private Click Measurement (PCM) and Firefox is working on Interoperable Private Attribution (IPA). We hope these efforts can help move the entire digital ad industry forward to making privacy the default. We think this work is important because it means we can improve the advertising-based business model that countless companies rely on to provide free services, making it more private instead of throwing it out entirely.
More Transparency: Public Block List & New Web Tracking Protections Help Page
Our browser extensions and non-beta apps are already open source, as is our Tracker Radar – the data set of trackers and other third-party web activity we identify through crawling. We’ve now also made our tracker protection list publicly available, so folks can see for themselves what we’re blocking and report any issues. We’ve also updated the Privacy Dashboard within our apps and extensions to show more information about third-party requests. Using the updated Privacy Dashboard, users can see which third-party requests have been blocked from loading and which other third-party requests have loaded, with reasons for both when available.
To further deliver on our commitment to transparency, we’ve posted a new help page that offers a comprehensive explanation of all the web tracking protections we provide across platforms. Users now have one place to look if they want to understand the different kinds of web privacy protections we offer on the platforms they use. This page also explains how different web tracking protections are offered based on what is technically possible on each platform, as well as what’s in development for this part of our product roadmap.
I’ve been building DuckDuckGo as an independent company for almost 15 years. After all this time, I believe more than ever that the majority of people online would choose to be more private if they could press a privacy “easy button.” That’s why our product vision is to pack as much privacy as we can into one package. We’re committed for the long haul to make simple privacy protection available to all, and will continue striving to strengthen the quality, understanding, and confidence in our product.
Governments, researchers, and policy makers need accurate market share data to evaluate search engine market diversity (or lack thereof). As explained by our series of posts on search engine choice screens (also known as preference menus), a well-designed choice screen could significantly increase competition and give users meaningful choice and control. However, without accurate search market share data, it is difficult to assess whether a particular choice screen is effective overall or to ensure consumers are presented with the search engines they want to use.
Common sources of search market share data, like the often-cited comScore and Statcounter, vary significantly for non-Google search engines, which creates confusion around search engine market share. Additionally, both these and other commonly cited sources have significant methodological deficiencies. In short, comScore suffers from panel selection bias (privacy-conscious users are unlikely to agree to be surveilled by comScore), while Statcounter's core flaw is that it uses trackers, which are often blocked by tracker-blocking tools, whether by search engine apps and extensions (like ours) or by other common apps and browser extensions. And both comScore and Statcounter reports are further flawed because they either do not report on, or lack a sufficiently large and representative sample of, users across all major markets and platforms.
Recently, two new market share reports were released by Cloudflare and Wikipedia respectively. Unlike comScore, Cloudflare’s and Wikipedia’s reports do not suffer from panel selection bias since they are not based on panels but instead based on traffic referred to Cloudflare-hosted websites and Wikipedia, respectively. And unlike Statcounter, this method also means Cloudflare’s and Wikipedia’s data is not affected by tracker-blocking tools. While Wikipedia is just one site, Cloudflare’s report is based on a large swath of the global Internet (25% of the top million websites use Cloudflare) so sample size isn’t a problem.
For these reasons, we recommend Cloudflare's report as currently the best source for baseline assessments of search engine market share and for assessing the effect of competition interventions like search preference menus. Wikipedia’s report is also useful because it can be analyzed in unique ways (more on both reports below). However, despite the methodological differences between all these reports, all still show that Google dominates the search engine market.
Cloudflare’s search market share report
Cloudflare's report is based on referrer data from search engine link clicks. When you click on a link from a search engine and visit that website, the site will know which search engine domain the user came from (using referrer information, e.g., duckduckgo.com). This report is made possible through Cloudflare Radar, a free public tool that lets anyone view global traffic as well as security trends and insights across the Internet as they happen. Cloudflare Radar is powered by the aggregated traffic flowing through the Cloudflare network. Radar insights like these are created by looking at patterns derived from aggregated data that has been anonymized, and so does not contain any search queries or personal information. (To be clear, that means that if you click on a link for a Cloudflare-supported site from DuckDuckGo, your referrer information does not reveal your search query or any personal information about you.)
Cloudflare’s report is updated quarterly, and the report can be split by operating system, device type, country, and month.
Wikipedia’s search market report
Wikipedia also recently published their search engine traffic data using a similar methodology. Every day Wikipedia counts link clicks from search engines and aggregates them into the search market share dashboard (also using direct referral data in a private manner).
We recommend Wikipedia’s data for more granular insights because their dashboard can be split in more ways, including by language, operating system, device type, and country, down to the day.
However, we recommend Cloudflare’s data to support higher-impact decisions because Wikipedia is just one site, whereas Cloudflare’s data is based on millions of sites. While Wikipedia’s data depends on the extent to which search engines include Wikipedia in their knowledge panels and search results, Cloudflare’s sample is so large that per-site effects are minimized.
In fact, we now believe Cloudflare’s report is by far the most accurate of all search engine market share reports out there. With it, governments, researchers, and policy makers can better understand the search engine market and the effect of tools like search choice screens.
The search engine and browser you use should be a personal choice, but right now it's often too complicated to switch away from gatekeeper defaults. So in an open letter to the companies, consumer organizations, and regulators with the power to create effective user choice screens, the CEOs of DuckDuckGo and Ecosia, and Qwant's President published a set of common-sense principles to improve this user experience online. This letter coincides with the final adoption of the EU's Digital Markets Act by the European Parliament this week.
Open Letter from DuckDuckGo, Ecosia, and Qwant
Choice screens and effective switching mechanisms are crucial tools that empower users and enable competition in the search engine and browser markets. The European Union (EU) has taken an important first step by adopting the Digital Markets Act (DMA), which includes obligations to implement such tools. However, the effectiveness of the EU’s mandates and related regulatory efforts across the globe will depend on how gatekeepers implement changes to comply with these new rules.
Without strict adherence to both clear rules and principles for fair choice screens and effective switching mechanisms, gatekeeping firms could choose to circumvent their legal obligations. We suggest regulators make clear their enforcement should adhere to the following ten essential principles for fair choice screens and effective switching mechanisms:
Gatekeeping firms should globally roll out fair choice screens and effective switching mechanisms now, using these principles. We are ready to work collaboratively towards this end, honoring users’ desire to choose the services they want to use, and not having those choices decided for them by default.
SIGNATORIES
In case you missed it: Find our series of blogs on search choice here.
If you're a Google Chrome user, you might be surprised to learn that you may soon be automatically entered into Google's new tracking and ad targeting methods called Topics and FLEDGE. Topics uses your Chrome browsing history to automatically collect information about your interests to share with other businesses, tracking companies, and websites without your knowledge. FLEDGE enables your Chrome browser to target you with ads based on your browsing history. These new methods enable creepy advertising and other content targeting without third-party cookies. While Google is positioning this as more privacy respecting, the simple fact is that tracking, targeting, and profiling is still tracking, targeting, and profiling, no matter what you call it.
1. Don't use Google Chrome! Google Topics and FLEDGE will only exist in Google Chrome. On iOS or Android we suggest you use our DuckDuckGo mobile browser, which offers best-in-class privacy protection by default when searching and browsing. Plus, we recently launched more app features into beta that will better protect your online privacy, like Email Protection and App Tracking Protection for Android. On desktop, we just launched the DuckDuckGo app for Mac into beta (Windows coming soon) so you can skip the Chrome headache completely and use ours by joining our waitlist (which is moving quickly).
2. Install the DuckDuckGo Chrome extension. In response to Google automatically turning on Topics and FLEDGE in Chrome, we've enhanced our Chrome extension to block Topics and FLEDGE interactions on websites, stopping these new forms of targeting. This is in addition to the all-in-one privacy protection that our extension offers, including private search, tracker blocking, Smarter Encryption, and Global Privacy Control. The Topics and FLEDGE blocking addition is included as of version 2022.4.18 which should auto-update, though you can also check the version you have installed from the extensions list within Chrome. For non-Chrome desktop browsers, you can get our extension here.
3. Change your Chrome and Google settings, which we recommend you do regardless of whether you continue to use Chrome or Google.
Note that even if you change these settings, we also recommend installing the DuckDuckGo Chrome extension to get more privacy protection than possible using Chrome settings alone.
In 2021, Google reluctantly signaled it would follow other browsers to forbid the use of third-party cookies by default, though it recently delayed doing so to at least 2023. Unlike other browsers, however, instead of just dropping third-party cookies, they are trying to replace them with alternative tracking mechanisms that are just as creepy and privacy invasive.
They first implemented a new tracking method in Chrome called Federated Learning of Cohorts (FLoC). FLoC was automatically turned on for millions of Google users who were not even given the chance to opt-out. This was understandably met with widespread criticism from privacy experts. To address the situation, we voiced our concerns and immediately enhanced our tracker blocking so that our Chrome extension would protect you from FLoC.
In response, Google announced it's ending FLoC and replacing it with yet another tracking method called Topics. Like FLoC, Topics will automatically use your browsing history to infer your interests in topics (e.g., “Child Internet Safety”, “Personal Loans”, etc.). While FLoC automatically shared a cohort identifier (for a group of people with correlated interests or demographics) with websites and tracking companies, Topics will automatically share a subset of your inferred interests, which these companies can then use to target ads and content at you.
While some suggest that Topics is a less invasive way of ad targeting, we don't agree. Why not? Fundamentally, it’s because, by default, Google Chrome will still be automatically surveilling your online activity and sharing information about you with advertisers and other parties so they can behaviorally target you without your consent. This targeting, regardless of how it's done, enables manipulation (e.g., exploiting personal vulnerabilities), discrimination (e.g., people not seeing job opportunities based on personal profiles), and filter bubbles (e.g., echo chambers that can divide people) that many people would like to avoid. Google says that users will be able to go in and delete Topics they don’t want shared, but Google knows full well that people rarely change default settings. The company also routinely puts “dark patterns” in the way of users changing these settings, making it needlessly difficult for people to take control of their privacy. Privacy should be the default.
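The Topics mechanism described above boils down to two steps inside the browser: infer interest categories from the domains you visit, then reveal a small subset of them to sites that ask. A toy sketch in Python follows; the domain-to-topic table is a made-up stand-in for Chrome's real taxonomy of hundreds of topics, and the actual browser API is JavaScript's `document.browsingTopics()`.

```python
import random

# Hypothetical domain-to-topic mapping; Chrome's real taxonomy has
# hundreds of topics and classification happens inside the browser.
DOMAIN_TOPICS = {
    "loans.example": "Personal Loans",
    "parenting.example": "Child Internet Safety",
    "recipes.example": "Cooking",
}

def infer_topics(history):
    """Infer interest topics from browsing history, Topics-style."""
    return {DOMAIN_TOPICS[d] for d in history if d in DOMAIN_TOPICS}

def topics_for_site(topics, k=1, seed=0):
    """Reveal a small random subset of inferred topics to a calling site."""
    rng = random.Random(seed)
    return rng.sample(sorted(topics), min(k, len(topics)))

interests = infer_topics(["loans.example", "recipes.example", "news.example"])
print(sorted(interests))           # ['Cooking', 'Personal Loans']
print(topics_for_site(interests))  # one inferred topic revealed to the advertiser
```

Even in this toy form, the privacy issue is visible: the inference runs over your whole history by default, and the site learns a genuine interest of yours without you ever opting in.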
In addition, the implementation of Topics presents a bunch of other privacy problems, including:
You know those ads that seem to follow you around onto every website you visit, long after looking something up online? Known as “re-targeting,” these ads are shown to you based on your browsing history from other websites, stored in third-party cookies. With the planned removal of third-party cookies, Google decided to also introduce FLEDGE, a new method of re-targeting that similarly moves Google ad technology directly into the Chrome browser.
When you visit a website where the advertiser may want to later follow you with an ad, the advertiser can tell your Chrome browser to put you into an interest group. Then, when you visit another website which displays ads, your Chrome browser will run an ad auction based on your interest groups and target specific ads at you. So much for your browser working for you!
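The flow described above, joining interest groups on one site and running an ad auction on another, can be illustrated with a toy simulation. The group names, advertisers, and bids are invented; the real mechanism is JavaScript's `navigator.joinAdInterestGroup()` and `navigator.runAdAuction()`, with bidding logic supplied by the advertisers.

```python
# Toy on-device ad auction in the style of FLEDGE; all values are
# hypothetical, and real auctions run advertiser-supplied bidding code.
interest_groups = [
    # (group name, advertiser, bid in arbitrary currency units)
    ("visited-shoe-store", "shoes.example", 0.45),
    ("abandoned-cart",     "gadgets.example", 0.80),
]

def run_ad_auction(groups):
    """Pick the highest-bidding interest group the browser has joined."""
    if not groups:
        return None
    return max(groups, key=lambda g: g[2])

winner = run_ad_auction(interest_groups)
print(winner)  # the 'abandoned-cart' advertiser wins and its ad is shown
```

The design choice worth noticing is that the auction runs inside your own browser against groups you were placed into without consent, which is precisely the "browser working against you" concern raised above.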
People are, by and large, vehemently against ad re-targeting, finding it invasive and creepy. Because your browsing history is used to target you, FLEDGE, just like Topics, opens you up to the same types of manipulation, discrimination, and potential embarrassment from highly personal ads being shown via your browser, and it also operates without your consent.
For all of the above reasons and more, DuckDuckGo has enhanced the tracker blocking for our Privacy Essentials Chrome extension to block Google Topics and FLEDGE. This is directly in line with the extension's purpose of protecting your privacy holistically as you use Chrome, without any of the complicated settings. It's privacy, simplified.
DISCLAIMER
* Copyrights belong to each article's respective author.
** Although this page should be free from tracking and other hazards, I can't guarantee that, after you click any links to external websites.
Google says it has suspended the app for the Chinese e-commerce giant Pinduoduo after malware was found in versions of the software. The move comes just weeks after Chinese security researchers published an analysis suggesting the popular e-commerce app sought to seize total control over affected devices by exploiting multiple security vulnerabilities in a variety of Android-based smartphones.
In November 2022, researchers at Google’s Project Zero warned about active attacks on Samsung mobile phones which chained together three security vulnerabilities that Samsung patched in March 2021, and which would have allowed an app to add or read any files on the device.
Google said it believes the exploit chain for Samsung devices belonged to a “commercial surveillance vendor,” without elaborating further. The highly technical writeup also did not name the malicious app in question.
On Feb. 28, 2023, researchers at the Chinese security firm DarkNavy published a blog post purporting to show evidence that a major Chinese e-commerce company’s app was using this same three-exploit chain to read user data stored by other apps on the affected device, and to make its app nearly impossible to remove.
DarkNavy likewise did not name the app they said was responsible for the attacks. In fact, the researchers took care to redact the name of the app from multiple code screenshots published in their writeup. DarkNavy did not respond to requests for clarification.
“At present, a large number of end users have complained on multiple social platforms,” reads a translated version of the DarkNavy blog post. “The app has problems such as inexplicable installation, privacy leakage, and inability to uninstall.”
Update, March 27, 1:24 p.m. ET: Dan Goodin over at Ars Technica has an important update on this story that indicates the Pinduoduo code was exploiting a zero-day vulnerability in Android — not Samsung. From that piece:
“A preliminary analysis by Lookout found that at least two off-Play versions of Pinduoduo for Android exploited CVE-2023-20963, the tracking number for an Android vulnerability Google patched in updates that became available to end users two weeks ago. This privilege-escalation flaw, which was exploited prior to Google’s disclosure, allowed the app to perform operations with elevated privileges. The app used these privileges to download code from a developer-designated site and run it within a privileged environment.
“The malicious apps represent ‘a very sophisticated attack for an app-based malware,’ Christoph Hebeisen, one of three Lookout researchers who analyzed the file, wrote in an email. ‘In recent years, exploits have not usually been seen in the context of mass-distributed apps. Given the extremely intrusive nature of such sophisticated app-based malware, this is an important threat mobile users need to protect against.’”
On March 3, 2023, a denizen of the now-defunct cybercrime community BreachForums posted a thread which noted that a unique component of the malicious app code highlighted by DarkNavy also was found in the e-commerce application whose name was apparently redacted from the DarkNavy analysis: Pinduoduo.
A Mar. 3, 2023 post on BreachForums, comparing the redacted code from the DarkNavy analysis with the same function in the Pinduoduo app available for download at the time.
On March 4, 2023, e-commerce expert Liu Huafang posted on the Chinese social media network Weibo that Pinduoduo’s app was using security vulnerabilities to gain market share by stealing user data from its competitors. That Weibo post has since been deleted.
On March 7, the newly created Github account Davinci1010 published a technical analysis claiming that until recently Pinduoduo’s source code included a “backdoor,” a hacking term used to describe code that allows an adversary to remotely and secretly connect to a compromised system at will.
That analysis includes links to archived versions of Pinduoduo’s app released before March 5 (version 6.50 and lower), which is when Davinci1010 says a new version of the app removed the malicious code.
Pinduoduo has not yet responded to requests for comment. Pinduoduo parent company PDD Holdings told Reuters Google has not shared details about why it suspended the app.
The company told CNN that it strongly rejects “the speculation and accusation that Pinduoduo app is malicious just from a generic and non-conclusive response from Google,” and said there were “several apps that have been suspended from Google Play at the same time.”
Pinduoduo is among China’s most popular e-commerce platforms, boasting approximately 900 million monthly active users.
Most of the news coverage of Google’s move against Pinduoduo emphasizes that the malware was found in versions of the Pinduoduo app available outside of Google’s app store — Google Play.
“Off-Play versions of this app that have been found to contain malware have been enforced on via Google Play Protect,” a Google spokesperson said in a statement to Reuters, adding that the Play version of the app has been suspended for security concerns.
However, Google Play is not available to consumers in China. As a result, the app will still be available via other mobile app stores catering to the Chinese market — including those operated by Huawei, Oppo, Tencent and VIVO.
Google said its ban did not affect the PDD Holdings app Temu, which is an online shopping platform in the United States. According to The Washington Post, four of the Apple App Store’s 10 most-downloaded free apps are owned by Chinese companies, including Temu and the social media network TikTok.
The Pinduoduo suspension comes as lawmakers in Congress this week are gearing up to grill the CEO of TikTok over national security concerns. TikTok, which is owned by Beijing-based ByteDance, said last month that it now has roughly 150 million monthly active users in the United States.
A new cybersecurity strategy released earlier this month by the Biden administration singled out China as the greatest cyber threat to the U.S. and Western interests. The strategy says China now presents the “broadest, most active, and most persistent threat to both government and private sector networks,” and says China is “the only country with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military, and technological power to do so.”
A new breach involving data from nine million AT&T customers is a fresh reminder that your mobile provider likely collects and shares a great deal of information about where you go and what you do with your mobile device — unless and until you affirmatively opt out of this data collection. Here’s a primer on why you might want to do that, and how.
Image: Shutterstock
Telecommunications giant AT&T disclosed this month that a breach at a marketing vendor exposed certain account information for nine million customers. AT&T said the data exposed did not include sensitive information, such as credit card or Social Security numbers, or account passwords, but was limited to “Customer Proprietary Network Information” (CPNI), such as the number of lines on an account.
Certain questions may be coming to mind right now, like “What the heck is CPNI?” And, “If it’s so ‘customer proprietary,’ why is AT&T sharing it with marketers?” Also maybe, “What can I do about it?” Read on for answers to all three questions.
AT&T’s disclosure said the information exposed included customer first name, wireless account number, wireless phone number and email address. In addition, a small percentage of customer records also exposed the rate plan name, past due amounts, monthly payment amounts and minutes used.
CPNI refers to customer-specific “metadata” about the account and account usage, and may include:
-Called phone numbers
-Time of calls
-Length of calls
-Cost and billing of calls
-Service features
-Premium services, such as directory call assistance
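To make the list above concrete, here is a rough sketch of what a CPNI-style call record might look like as structured data. The field names are hypothetical, not any carrier's actual schema:

```python
from dataclasses import dataclass

# Hypothetical sketch of a CPNI-style call record. Field names are
# illustrative only, not any carrier's actual schema.
@dataclass
class CallRecord:
    called_number: str        # called phone number
    start_time: str           # time of the call
    duration_sec: int         # length of the call
    cost_cents: int           # cost and billing of the call
    service_features: tuple   # e.g. call waiting, directory assistance

record = CallRecord("+1-555-0100", "2023-03-01T14:02:00", 340, 125,
                    ("call_waiting", "directory_assistance"))
print(record.duration_sec)  # 340
```

Even without names or message contents, a stream of records like this describes who a customer talks to, when, and for how long.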
According to a succinct CPNI explainer at TechTarget, CPNI is private and protected information that cannot be used for advertising or marketing directly.
“An individual’s CPNI can be shared with other telecommunications providers for network operating reasons,” wrote TechTarget’s Gavin Wright. “So, when the individual first signs up for phone service, this information is automatically shared by the phone provider to partner companies.”
Is your mobile Internet usage covered by CPNI laws? That’s less clear, as the CPNI rules were established before mobile phones and wireless Internet access were common. TechTarget’s CPNI primer explains:
“Under current U.S. law, cellphone use is only protected as CPNI when it is being used as a telephone. During this time, the company is acting as a telecommunications provider requiring CPNI rules. Internet use, websites visited, search history or apps used are not protected CPNI because the company is acting as an information services provider not subject to these laws.”
Hence, the carriers can share and sell this data because they’re not explicitly prohibited from doing so. All three major carriers say they take steps to anonymize the customer data they share, but researchers have shown it is not terribly difficult to de-anonymize supposedly anonymous web-browsing data.
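The de-anonymization research mentioned above rests on a simple observation: the set of sites a person visits tends to be unique to them, so a pseudonymous history acts like a fingerprint. A toy sketch of that linking step, using made-up data:

```python
# Toy illustration of why "anonymized" browsing data is linkable: each
# pseudonymous user's set of visited domains is a fingerprint. All data
# here is made up.
anonymized = {
    "user_a1": {"news.example", "bank.example", "forum.example"},
    "user_b2": {"news.example", "shop.example"},
}

# An attacker with partial outside knowledge (a few sites someone is
# known to visit, say from public posts or ad logs) matches fingerprints.
known_visits = {"bank.example", "forum.example"}

matches = [uid for uid, sites in anonymized.items()
           if known_visits <= sites]
print(matches)  # ['user_a1'] -- the pseudonym is re-identified
```

With real histories spanning hundreds of domains, only a handful of known visits are typically needed to single out one user.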
“Your phone, and consequently your mobile provider, know a lot about you,” wrote Jack Morse for Mashable. “The places you go, apps you use, and the websites you visit potentially reveal all kinds of private information — e.g. religious beliefs, health conditions, travel plans, income level, and specific tastes in pornography. This should bother you.”
Happily, all of the U.S. carriers are required to offer customers ways to opt out of having data about how they use their devices shared with marketers. Here’s a look at some of the carrier-specific practices and opt-out options.
AT&T’s policy says it shares device or “ad ID”, combined with demographics including age range, gender, and ZIP code information with third parties which explicitly include advertisers, programmers, and networks, social media networks, analytics firms, ad networks and other similar companies that are involved in creating and delivering advertisements.
AT&T said the data exposed on 9 million customers was several years old, and mostly related to device upgrade eligibility. This may sound like the data went to just one of its partners who experienced a breach, but in all likelihood it also went to hundreds of AT&T’s partners.
AT&T’s CPNI opt-out page says it shares CPNI data with several of its affiliates, including WarnerMedia, DirecTV and Cricket Wireless. Until recently, AT&T also shared CPNI data with Xandr, whose privacy policy in turn explains that it shares data with hundreds of other advertising firms. Microsoft bought Xandr from AT&T last year.
According to the Electronic Privacy Information Center (EPIC), T-Mobile seems to be the only company out of the big three to extend to all customers the rights conferred by the California Consumer Privacy Act (CCPA).
EPIC says T-Mobile customer data sold to third parties uses another unique identifier called mobile advertising IDs or “MAIDs.” T-Mobile claims that MAIDs don’t directly identify consumers, but under the CCPA MAIDs are considered “personal information” that can be connected to IP addresses, mobile apps installed or used with the device, any video or content viewing information, and device activity and attributes.
T-Mobile customers can opt out by logging into their account and navigating to the profile page, then to “Privacy and Notifications.” From there, toggle off the options for “Use my data for analytics and reporting” and “Use my data to make ads more relevant to me.”
Verizon’s privacy policy says it does not sell information that personally identifies customers (e.g., name, telephone number or email address), but it does allow third-party advertising companies to collect information about activity on Verizon websites and in Verizon apps, through MAIDs, pixels, web beacons and social network plugins.
According to Wired.com’s tutorial, Verizon users can opt out by logging into their Verizon account through a web browser or the My Verizon mobile app. From there, select the Account tab, then click Account Settings and Privacy Settings on the web. For the mobile app, click the gear icon in the upper right corner and then Manage Privacy Settings.
On the privacy preferences page, web users can choose “Don’t use” under the Custom Experience section. On the My Verizon app, toggle any green sliders to the left.
EPIC notes that all three major carriers say resetting the consumer’s device ID and/or clearing cookies in the browser will similarly reset any opt-out preferences (i.e., the customer will need to opt out again), and that blocking cookies by default may also block the opt-out cookie from being set.
T-Mobile says its opt out is device-specific and/or browser-specific. “In most cases, your opt-out choice will apply only to the specific device or browser on which it was made. You may need to separately opt out from your other devices and browsers.”
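The cookie-based mechanism EPIC describes is why the preference is so fragile: the opt-out is itself just a browser cookie, so clearing cookies discards it and blocking cookies prevents it from being set at all. A minimal simulation (the cookie name is hypothetical):

```python
# Simulation of a cookie-based ad opt-out. The cookie name "ad_optout"
# is hypothetical. Because the preference lives client-side, clearing
# cookies silently re-enables data collection.
def is_opted_out(cookies: dict) -> bool:
    return cookies.get("ad_optout") == "1"

cookies = {}
cookies["ad_optout"] = "1"      # user opts out; preference is a cookie
assert is_opted_out(cookies)

cookies.clear()                  # user clears browser cookies
print(is_opted_out(cookies))    # False -- the opt-out is gone
```

This is also why the carriers warn that the choice is per-browser and per-device: there is no account-level record, only whatever each client happens to be storing.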
Both AT&T and Verizon offer opt-in programs that gather and share far more information, including device location, the phone numbers you call, and which sites you visit using your mobile and/or home Internet connection. AT&T calls this their Enhanced Relevant Advertising Program; Verizon’s is called Custom Experience Plus.
In 2021, multiple media outlets reported that some Verizon customers were being automatically enrolled in Custom Experience Plus — even after those customers had already opted out of the same program under its previous name — “Verizon Selects.”
If none of the above opt out options work for you, at a minimum you should be able to opt out of CPNI sharing by calling your carrier, or by visiting one of their stores.
Why should you opt out of sharing CPNI data? For starters, some of the nation’s largest wireless carriers don’t have a great track record in terms of protecting the sensitive information that you give them solely for the purposes of becoming a customer — let alone the information they collect about your use of their services after that point.
In January 2023, T-Mobile disclosed that someone stole data on 37 million customer accounts, including customer name, billing address, email, phone number, date of birth, T-Mobile account number and plan details. In August 2021, T-Mobile acknowledged that hackers made off with the names, dates of birth, Social Security numbers and driver’s license/ID information on more than 40 million current, former or prospective customers who applied for credit with the company.
Last summer, a cybercriminal began selling the names, email addresses, phone numbers, SSNs and dates of birth on 23 million Americans. An exhaustive analysis of the data strongly suggested it all belonged to customers of one AT&T company or another. AT&T stopped short of saying the data wasn’t theirs, but said the records did not appear to have come from its systems and may be tied to a previous data incident at another company.
However frequently the carriers may alert consumers about CPNI breaches, it’s probably nowhere near often enough. Currently, the carriers are required to report a consumer CPNI breach only in cases “when a person, without authorization or exceeding authorization, has intentionally gained access to, used or disclosed CPNI.”
But that definition of breach was crafted eons ago, back when the primary way CPNI was exposed was through “pretexting,” such as when the phone company’s employees are tricked into giving away protected customer data.
In January, regulators at the U.S. Federal Communications Commission (FCC) proposed amending the definition of “breach” to include things like inadvertent disclosure — such as when companies expose CPNI data on a poorly-secured server in the cloud. The FCC is accepting public comments on the matter until March 24, 2023.
While it’s true that the leak of CPNI data does not involve sensitive information like Social Security or credit card numbers, one thing AT&T’s breach notice doesn’t mention is that CPNI data — such as balances and payments made — can be abused by fraudsters to make scam emails and text messages more believable when they’re trying to impersonate AT&T and phish AT&T customers.
The other problem with letting companies share or sell your CPNI data is that the wireless carriers can change their privacy policies at any time, and you are assumed to be okay with those changes as long as you keep using their services.
For example, location data from your wireless device is most definitely CPNI, and yet until very recently all of the major carriers sold their customers’ real-time location data to third party data brokers without customer consent.
What was their punishment? In 2020, the FCC proposed fines totaling $208 million against all of the major carriers for selling their customers’ real-time location data. If that sounds like a lot of money, consider that all of the major wireless providers reported tens of billions of dollars in revenue last year (e.g., Verizon’s consumer revenue alone was more than $100 billion last year).
If the United States had federal privacy laws that were at all consumer-friendly and relevant to today’s digital economy, this kind of data collection and sharing would always be opt-in by default. In such a world, the enormously profitable wireless industry would likely be forced to offer clear financial incentives to customers who choose to share this information.
But until that day arrives, understand that the carriers can change their data collection and sharing policies when it suits them. And regardless of whether you actually read any notices about changes to their privacy policies, you will have agreed to those changes as long as you continue using their service.
The U.S. Federal Bureau of Investigation (FBI) this week arrested a New York man on suspicion of running BreachForums, a popular English-language cybercrime forum where some of the world’s biggest hacked databases routinely show up for sale. The forum’s administrator “Pompompurin” has been a thorn in the side of the FBI for years, and BreachForums is widely considered a reincarnation of RaidForums, a remarkably similar crime forum that the FBI infiltrated and dismantled in 2022.
Federal agents carting items out of Fitzpatrick’s home on March 15. Image: News 12 Westchester.
In an affidavit filed with the District Court for the Southern District of New York, FBI Special Agent John Longmire said that at around 4:30 p.m. on March 15, 2023, he led a team of law enforcement agents that made a probable cause arrest of Conor Brian Fitzpatrick in Peekskill, NY.
“When I arrested the defendant on March 15, 2023, he stated to me in substance and in part that: a) his name was Conor Brian Fitzpatrick; b) he used the alias ‘pompompurin,’ and c) he was the owner and administrator of ‘BreachForums,’ the data breach website referenced in the Complaint,” Longmire wrote.
Pompompurin has been something of a nemesis to the FBI for several years. In November 2021, KrebsOnSecurity broke the news that thousands of fake emails about a cybercrime investigation were blasted out from the FBI’s email systems and Internet addresses.
Pompompurin took credit for that stunt, and said he was able to send the FBI email blast by exploiting a flaw in an FBI portal designed to share information with state and local law enforcement authorities. The FBI later acknowledged that a software misconfiguration allowed someone to send the fake emails.
In December, 2022, KrebsOnSecurity broke the news that hackers active on BreachForums had infiltrated the FBI’s InfraGard program, a vetted FBI program designed to build cyber and physical threat information sharing partnerships with experts in the private sector. The hackers impersonated the CEO of a major financial company, applied for InfraGard membership in the CEO’s name, and were granted admission to the community.
From there, the hackers plundered the InfraGard member database, and proceeded to sell contact information on more than 80,000 InfraGard members in an auction on BreachForums. The FBI responded by disabling the portal for some time, before ultimately forcing all InfraGard members to re-apply for membership.
More recently, BreachForums was the sales forum for data stolen from DC Health Link, a health insurance exchange based in Washington, D.C. that suffered a data breach this month. The sales thread initially said the data included the names, Social Security numbers, dates of birth, health plan and enrollee information and more on 170,000 individuals, although the official notice about the breach says 56,415 people were affected.
In April 2022, the U.S. Justice Department seized the servers and domains for RaidForums, an extremely popular English-language cybercrime forum that sold access to more than 10 billion consumer records stolen in some of the world’s largest data breaches since 2015. As part of that operation, the feds also charged the alleged administrator, 21-year-old Diogo Santos Coelho of Portugal, with six criminal counts.
Coelho was arrested in the United Kingdom on Jan. 31, 2022. By that time, the new BreachForums had been live for just under a week, but with a familiar look.
BreachForums remains accessible online, and from reviewing the live chat stream on the site’s home page it appears the forum’s active users are only just becoming aware that their administrator — and the site’s database — are likely now in FBI hands:
“Wait if they arrested pom then doesn’t the FBI have all of our details we’ve registered with?” asked one worried BreachForums member.
“But we all have good VPNs I guess, right…right guys?” another denizen offered.
“Like pom would most likely do a plea bargain and cooperate with the feds as much as possible,” replied another.
Fitzpatrick could not be immediately reached for comment. The FBI declined to comment for this story.
There is only one page to the criminal complaint against Fitzpatrick (PDF), which charges him with one count of conspiracy to commit access device fraud. The affidavit on his arrest is available here (PDF).
Update: Corrected spelling of FBI agent’s last name.
Microsoft on Tuesday released updates to quash at least 74 security bugs in its Windows operating systems and software. Two of those flaws are already being actively attacked, including an especially severe weakness in Microsoft Outlook that can be exploited without any user interaction.
The Outlook vulnerability (CVE-2023-23397) affects all versions of Microsoft Outlook from 2013 to the newest. Microsoft said it has seen evidence that attackers are exploiting this flaw, which can be done without any user interaction by sending a booby-trapped email that triggers automatically when retrieved by the email server — before the email is even viewed in the Preview Pane.
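Public analyses of CVE-2023-23397 describe the trigger as a reminder property in a crafted message that points at an attacker-controlled UNC path, causing Outlook to attempt SMB authentication automatically. Microsoft published its own audit script for Exchange; as a rough, simplified sketch of the kind of thing triage tooling looks for, one could flag UNC paths in raw message data:

```python
import re

# Simplified triage sketch: flag UNC paths (\\host\share\...) embedded
# in raw message data, the tell-tale of a CVE-2023-23397-style reminder
# lure. Real tooling inspects the specific MAPI reminder property; this
# regex pass is an illustration only.
UNC_RE = re.compile(r"\\\\[\w.-]+\\[\w$.-]+")

def find_unc_paths(blob: str):
    return UNC_RE.findall(blob)

sample = "PidLidReminderFileParameter: \\\\203.0.113.5\\share\\s.wav"
hits = find_unc_paths(sample)
print(len(hits))  # 1 -- a reminder pointing at an external SMB host
```

A benign message body with no UNC paths yields no hits, which is what makes this a useful first-pass filter.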
While CVE-2023-23397 is labeled as an “Elevation of Privilege” vulnerability, that label doesn’t accurately reflect its severity, said Kevin Breen, director of cyber threat research at Immersive Labs.
“Known as an NTLM relay attack, it allows an attacker to get someone’s NTLM hash [Windows account password] and use it in an attack commonly referred to as ‘Pass The Hash.’”
“The vulnerability effectively lets the attacker authenticate as a trusted individual without having to know the person’s password,” Breen said. “This is on par with an attacker having a valid password with access to an organization’s systems.”
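Breen's point can be made concrete with a toy model of a challenge-response relay. In the sketch below, HMAC stands in for NTLM's real response function (this illustrates the relay concept only, not the actual NTLM protocol): the attacker never learns the password, it simply forwards the server's challenge to the victim and the victim's response back to the server.

```python
import hmac
import hashlib
import os

# Toy model of a challenge-response relay. HMAC-SHA256 stands in for
# NTLM's response function; this is NOT the real NTLM protocol.
SECRET = b"victims-password-hash"   # known to victim and server only

def respond(challenge: bytes) -> bytes:
    # What the victim's client computes from a server challenge.
    return hmac.new(SECRET, challenge, hashlib.sha256).digest()

def server_auth(challenge: bytes, response: bytes) -> bool:
    return hmac.compare_digest(respond(challenge), response)

challenge = os.urandom(16)           # server issues a fresh challenge

# The attacker in the middle never knows SECRET. It relays the server's
# challenge to the victim, and the victim's answer back to the server.
relayed_response = respond(challenge)              # victim takes the lure
print(server_auth(challenge, relayed_response))    # True -- attacker is in
```

The attack works because the server only verifies that *someone* holding the secret answered its challenge, not *who* initiated the connection.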
Security firm Rapid7 points out that this bug affects self-hosted versions of Outlook like Microsoft 365 Apps for Enterprise, but Microsoft-hosted online services like Microsoft 365 are not vulnerable.
The other zero-day flaw being actively exploited in the wild — CVE-2023-24880 — is a “Security Feature Bypass” in Windows SmartScreen, part of Microsoft’s slate of endpoint protection tools.
Patch management vendor Action1 notes that the exploit for this bug is low in complexity and requires no special privileges. But it does require some user interaction, and can’t be used to gain access to private information or privileges. However, the flaw can allow other malicious code to run without being detected by SmartScreen reputation checks.
Dustin Childs, head of threat awareness at Trend Micro’s Zero Day Initiative, said CVE-2023-24880 allows attackers to create files that would bypass Mark of the Web (MOTW) defenses.
“Protective measures like SmartScreen and Protected View in Microsoft Office rely on MOTW, so bypassing these makes it easier for threat actors to spread malware via crafted documents and other infected files that would otherwise be stopped by SmartScreen,” Childs said.
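On Windows, the Mark of the Web is stored in an NTFS alternate data stream named `Zone.Identifier` attached to downloaded files; a `ZoneId` of 3 marks the file as coming from the Internet, which is what activates SmartScreen and Protected View. A small parser for that stream's INI-style content (the sample data is illustrative):

```python
# Parse the ZoneId out of a Zone.Identifier alternate data stream, the
# NTFS metadata that carries the Mark of the Web. ZoneId=3 = Internet.
def parse_zone_id(stream_text: str):
    for line in stream_text.splitlines():
        if line.strip().startswith("ZoneId="):
            return int(line.split("=", 1)[1])
    return None  # no MOTW -> SmartScreen/Protected View not triggered

sample = "[ZoneTransfer]\nZoneId=3\nHostUrl=https://example.com/file.docx\n"
print(parse_zone_id(sample))   # 3
print(parse_zone_id(""))       # None -- what a MOTW bypass leaves behind
```

A bypass like CVE-2023-24880 amounts to delivering a file that ends up in the `None` case, so the reputation checks never fire.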
Seven other vulnerabilities Microsoft patched this week earned its most-dire “critical” severity label, meaning the updates address security holes that could be exploited to give the attacker full, remote control over a Windows host with little or no interaction from the user.
Also this week, Adobe released eight patches addressing a whopping 105 security holes across a variety of products, including Adobe Photoshop, Cold Fusion, Experience Manager, Dimension, Commerce, Magento, Substance 3D Stager, Cloud Desktop Application, and Illustrator.
For a more granular rundown on the updates released today, see the SANS Internet Storm Center roundup. If today’s updates cause any stability or usability issues in Windows, AskWoody.com will likely have the lowdown on that.
Please consider backing up your data and/or imaging your system before applying any updates. And feel free to sound off in the comments if you experience any problems as a result of these patches.
Two U.S. men have been charged with hacking into a U.S. Drug Enforcement Administration (DEA) online portal that taps into 16 different federal law enforcement databases. Both are alleged to be part of a larger criminal organization that specializes in using fake emergency data requests from compromised police and government email accounts to publicly threaten and extort their victims.
Prosecutors for the Eastern District of New York today unsealed criminal complaints against Sagar Steven Singh — a.k.a “Weep” — a 19-year-old from Pawtucket, Rhode Island; and Nicholas Ceraolo, 25, of Queens, NY, who allegedly went by the handles “Convict” and “Ominus.”
The Justice Department says Singh and Ceraolo belong to a group of cybercriminals known to its members as “ViLE,” who specialize in obtaining personal information about third-party victims, which they then use to harass, threaten or extort the victims, a practice known as “doxing.”
“ViLE is collaborative, and the members routinely share tactics and illicitly obtained information with each other,” prosecutors charged.
The government alleges the defendants and other members of ViLE use various methods to obtain victims’ personal information, including:
-tricking customer service employees;
-submitting fraudulent legal process to social media companies to elicit users’ registration information;
-co-opting and corrupting corporate insiders;
-searching public and private online databases;
-accessing a nonpublic United States government database without authorization; and
-unlawfully using official email accounts belonging to other countries.
The complaint says once they obtained a victim’s information, Singh and Ceraolo would post the information in an online forum. The government refers to this community only as “Forum-1,” saying that it is administered by the leader of ViLE (referenced in the complaint as “CC-1”).
“Victims are extorted into paying CC-1 to have their information removed from Forum-1,” prosecutors allege. “Singh also uses the threat of revealing personal information to extort victims into giving him access to their social media accounts, which Singh then resells.”
Sources tell KrebsOnSecurity that in addition to being members of ViLE, both Weep and Ominus are or were staff members for Doxbin, a highly toxic online community that provides a forum for digging up personal information on people and posting it publicly. This is supported by the Doxbin administrator’s claimed responsibility for a high-profile intrusion at the DEA’s law enforcement data sharing portal last year.
A screenshot of alleged access to the Drug Enforcement Administration’s intelligence sharing portal, shared by “KT,” the current administrator of the doxing and harassment community Doxbin.
The government alleges that on May 7, 2022, Singh used stolen credentials to log into a U.S. federal government portal without authorization. The complaint doesn’t specify which agency portal was hacked, but it does state that the portal included access to law enforcement databases that track narcotics seizures in the United States.
On May 12, 2022, KrebsOnSecurity broke the news that hackers had gained access to a DEA portal that taps into 16 different federal law enforcement databases. As reported at the time, the inside scoop on how that hack went down came from KT, the current administrator of the Doxbin and the individual referenced in the government’s complaint as “CC-1.”
Indeed, a screenshot of the ViLE group website includes the group’s official roster, which lists KT at the top, followed by Weep and Ominus.
A screenshot of the website for the cybercriminal group “ViLE.” Image: USDOJ.
In March 2022, KrebsOnSecurity warned that multiple cybercrime groups were finding success with fraudulent Emergency Data Requests (EDRs), wherein the hackers use compromised police and government email accounts to file warrantless data requests with social media firms and mobile telephony providers, attesting that the information being requested can’t wait for a warrant because it relates to an urgent matter of life and death.
That story showed that the previous owner of the Doxbin also was part of a teenage hacking group that specialized in offering fake EDRs as a service on the dark web.
Prosecutors say they tied Singh to the government portal hack because he connected to it from an Internet address that he’d previously used to access a social media account registered in his name. When they raided Singh’s residence on Sept. 8, 2022 and seized his devices, investigators with Homeland Security found a cellular phone and laptop that allegedly “contained extensive evidence of access to the Portal.”
The complaint alleges that between February 2022 and May 2022, Ceraolo used an official email account belonging to a Bangladeshi police official to pose as a police officer in communication with U.S.-based social media platforms.
“In these communications, Ceraolo requested personal information about users of these platforms, under the false pretense that the users were committing crimes or in life-threatening danger,” the complaint states.
For example, on or about March 13, 2022, Ceraolo allegedly used the Bangladeshi police email account to falsely claim that the target of the EDR had sent bomb threats, distributed child pornography and threatened officials of the Bangladeshi government.
On or about May 9, 2022, the government says, Singh sent a friend screenshots of text messages between himself and someone he had doxed on the Doxbin and was trying to extort for their Instagram handle. The data included the victim’s Social Security number, driver’s license number, cellphone number, and home address.
“Look familiar?” Singh allegedly wrote to the victim. “You’re gonna comply to me if you don’t want anything negative to happen to your parents. . . I have every detail involving your parents . . . allowing me to do whatever I desire to them in malicious ways.”
Neither of the defendants could be immediately reached for comment. KT, the current administrator of Doxbin, declined a request for comment on the charges.
Ceraolo is a self-described security researcher who has been credited in many news stories over the years with discovering security vulnerabilities at AT&T, T-Mobile, Comcast and Cox Communications.
Ceraolo’s stated partner in most of these discoveries — a 30-year-old Connecticut man named Ryan “Phobia” Stevenson — was charged in 2019 with being part of a group that stole millions of dollars worth of cryptocurrencies via SIM-swapping, a crime that involves tricking a mobile provider into routing a target’s calls and text messages to another device.
In 2018, KrebsOnSecurity detailed how Stevenson earned bug bounty rewards and public recognition from top telecom companies for finding and reporting security holes in their websites, all the while secretly peddling those same vulnerabilities to cybercriminals.
According to the Justice Department, if convicted Ceraolo faces up to 20 years’ imprisonment for conspiracy to commit wire fraud; both Ceraolo and Singh face five years’ imprisonment for conspiracy to commit computer intrusions.
A copy of the complaint against Ceraolo and Singh is here (PDF).
A Croatian national has been arrested for allegedly operating NetWire, a Remote Access Trojan (RAT) marketed on cybercrime forums since 2012 as a stealthy way to spy on infected systems and siphon passwords. The arrest coincided with a seizure of the NetWire sales website by the U.S. Federal Bureau of Investigation (FBI). While the defendant in this case hasn’t yet been named publicly, the NetWire website has been leaking information about the likely true identity and location of its owner for the past 11 years.
Typically installed by booby-trapped Microsoft Office documents and distributed via email, NetWire is a multi-platform threat that is capable of targeting not only Microsoft Windows machines but also Android, Linux and Mac systems.
NetWire’s reliability and relatively low cost ($80-$140 depending on features) have made it an extremely popular RAT on the cybercrime forums for years, and NetWire infections consistently rank among the top 10 most active RATs in use.
NetWire has been sold openly on the same website since 2012: worldwiredlabs[.]com. That website now features a seizure notice from the U.S. Department of Justice (DOJ), which says the domain was taken as part of “a coordinated law enforcement action taken against the NetWire Remote Access Trojan.”
“As part of this week’s law enforcement action, authorities in Croatia on Tuesday arrested a Croatian national who allegedly was the administrator of the website,” reads a statement by the DOJ today. “This defendant will be prosecuted by Croatian authorities. Additionally, law enforcement in Switzerland on Tuesday seized the computer server hosting the NetWire RAT infrastructure.”
Neither the DOJ’s statement nor a press release on the operation published by Croatian authorities mentioned the name of the accused. But it’s fairly remarkable that it has taken so long for authorities in the United States and elsewhere to move against NetWire and its alleged proprietor, given that the RAT’s author apparently did very little to hide his real-life identity.
The WorldWiredLabs website first came online in February 2012 using a dedicated host with no other domains. The site’s true WHOIS registration records have always been hidden by privacy protection services, but there are plenty of clues in historical Domain Name System (DNS) records for WorldWiredLabs that point in the same direction.
In October 2012, the WorldWiredLabs domain moved to another dedicated server at the Internet address 198.91.90.7, which was home to just one other domain: printschoolmedia[.]org, also registered in 2012.
According to DomainTools.com, printschoolmedia[.]org was registered to a Mario Zanko in Zapresic, Croatia, and to the email address zankomario@gmail.com. DomainTools further shows this email address was used to register one other domain in 2012: wwlabshosting[.]com, also registered to Mario Zanko from Croatia.
A review of DNS records for both printschoolmedia[.]org and wwlabshosting[.]com shows that while these domains were online they both used the DNS name server ns1.worldwiredlabs[.]com. No other domains have been recorded using that same name server.
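The pivot described here, grouping domains by a name server that nothing else uses, is easy to express in code. A sketch over hypothetical passive-DNS records:

```python
from collections import defaultdict

# Group domains by historical name server -- the pivot used above. The
# records here are hypothetical stand-ins for passive-DNS data.
records = {
    "worldwiredlabs.com":   {"ns1.worldwiredlabs.com"},
    "printschoolmedia.org": {"ns1.worldwiredlabs.com"},
    "wwlabshosting.com":    {"ns1.worldwiredlabs.com"},
    "unrelated.example":    {"ns1.bigdnsprovider.example"},
}

by_ns = defaultdict(set)
for domain, nameservers in records.items():
    for ns in nameservers:
        by_ns[ns].add(domain)

# A private name server shared by a handful of domains and no one else
# ties those domains to a common operator.
print(sorted(by_ns["ns1.worldwiredlabs.com"]))
```

The signal comes from exclusivity: a name server at a big DNS provider serves millions of unrelated domains, while a vanity name server like this one links only its owner's infrastructure.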
The WorldWiredLabs website, in 2013. Source: Archive.org.
DNS records for worldwiredlabs[.]com also show the site forwarded incoming email to the address tommaloney@ruggedinbox.com. Constella Intelligence, a service that indexes information exposed by public database leaks, shows this email address was used to register an account at the clothing retailer romwe.com, using the password “123456xx.”
Running a reverse search on this password in Constella Intelligence shows there are more than 450 email addresses known to have used this credential, and two of those are zankomario@gmail.com and zankomario@yahoo.com.
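A reverse password search of the kind Constella performs is essentially an inverted index over breached credentials. A toy sketch with made-up records:

```python
from collections import defaultdict

# Build an inverted index (password -> accounts) over fake breach
# records. This is the structure behind a "reverse password search."
breach_records = [
    ("tommaloney@ruggedinbox.example", "123456xx"),
    ("alice@mail.example", "hunter2"),
    ("zankomario@mail.example", "123456xx"),
]

by_password = defaultdict(list)
for email, password in breach_records:
    by_password[password].append(email)

# Accounts that reused the same distinctive password across services
# are likely controlled by one person.
print(by_password["123456xx"])
```

The pivot is weaker for very common passwords, which is why investigators look for reuse of distinctive strings rather than, say, "password123".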
A search on zankomario@gmail.com in Skype returns three results, including the account name “Netwire” and the username “Dugidox,” and another for a Mario Zanko (username zanko.mario).
Dugidox corresponds to the hacker handle most frequently associated with NetWire sales and support discussion threads on multiple cybercrime forums over the years.
Constella ties dugidox@gmail.com to a number of website registrations, including the Dugidox handle on BlackHatWorld and HackForums, and to IP addresses in Croatia for both. Constella also shows the email address zankomario@gmail.com used the password “dugidox2407.”
In 2010, someone using the email address dugidox@gmail.com registered the domain dugidox[.]com. The WHOIS registration records for that domain list a “Senela Eanko” as the registrant, but the address used was the same street address in Zapresic that appears in the WHOIS records for printschoolmedia[.]org, which is registered in Mr. Zanko’s name.
Prior to the demise of Google+, the email address dugidox@gmail.com mapped to an account with the nickname “Netwire wwl.” The dugidox email also was tied to a Facebook account (mario.zanko3), which featured check-ins and photos from various places in Croatia.
That Facebook profile is no longer active, but back in January 2017, the administrator of WorldWiredLabs posted that he was considering adding certain Android mobile functionality to his service. Three days after that, the mario.zanko3 profile posted a photo saying he was selected for an Android instruction course — with his dugidox email in the photo, naturally.
Incorporation records from the U.K.’s Companies House show that in 2017 Mr. Zanko became an officer in a company called Godbex Solutions LTD. A YouTube video invoking this corporate name describes Godbex as a “next generation platform” for exchanging gold and cryptocurrencies.
The U.K. Companies House records show Godbex was dissolved in 2020. It also says Mr. Zanko was born in July 1983, and lists his occupation as “electrical engineer.”
Mr. Zanko did not respond to multiple requests for comment.
A statement from the Croatian police about the NetWire takedown is here.
The domain name registrar Freenom, whose free domain names have long been a draw for spammers and phishers, has stopped allowing new domain name registrations. The move comes after the Dutch registrar was sued by Meta, which alleges the company ignores abuse complaints about phishing websites while monetizing traffic to those abusive domains.
Freenom is the domain name registry service provider for five so-called “country code top level domains” (ccTLDs), including .cf for the Central African Republic; .ga for Gabon; .gq for Equatorial Guinea; .ml for Mali; and .tk for Tokelau.
Freenom has always waived the registration fees for domains in these country-code domains, presumably as a way to encourage users to pay for related services, such as registering a .com or .net domain, for which Freenom does charge a fee.
On March 3, 2023, social media giant Meta sued Freenom in a Northern California court, alleging cybersquatting violations and trademark infringement. The lawsuit also seeks information about the identities of 20 different “John Does” — Freenom customers that Meta says have been particularly active in phishing attacks against Facebook, Instagram, and WhatsApp users.
The lawsuit points to a 2021 study (PDF) on the abuse of domains conducted by Interisle Consulting Group, which discovered that those ccTLDs operated by Freenom made up five of the Top Ten TLDs most abused by phishers.
“The five ccTLDs to which Freenom provides its services are the TLDs of choice for cybercriminals because Freenom provides free domain name registration services and shields its customers’ identity, even after being presented with evidence that the domain names are being used for illegal purposes,” the complaint charges. “Even after receiving notices of infringement or phishing by its customers, Freenom continues to license new infringing domain names to those same customers.”
Meta further alleges that “Freenom has repeatedly failed to take appropriate steps to investigate and respond appropriately to reports of abuse,” and that it monetizes the traffic from infringing domains by reselling them and by adding “parking pages” that redirect visitors to other commercial websites, websites with pornographic content, and websites used for malicious activity like phishing.
Freenom has not yet responded to requests for comment. But attempts to register a domain through the company’s website as of publication time generated an error message that reads:
“Because of technical issues the Freenom application for new registrations is temporarily out-of-order. Please accept our apologies for the inconvenience. We are working on a solution and hope to resume operations shortly. Thank you for your understanding.”
Image: Interisle Consulting Group, Phishing Landscape 2021, Sept. 2021.
Although Freenom is based in The Netherlands, some of its other sister companies named as defendants in the lawsuit are incorporated in the United States.
Meta initially filed this lawsuit in December 2022, but it asked the court to seal the case, which would have restricted public access to court documents in the dispute. That request was denied, and Meta amended and re-filed the lawsuit last week.
According to Meta, this isn’t just a case of another domain name registrar ignoring abuse complaints because it’s bad for business. The lawsuit alleges that the owners of Freenom “are part of a web of companies created to facilitate cybersquatting, all for the benefit of Freenom.”
“On information and belief, one or more of the ccTLD Service Providers, ID Shield, Yoursafe, Freedom Registry, Fintag, Cervesia, VTL, Joost Zuurbier Management Services B.V., and Doe Defendants were created to hide assets, ensure unlawful activity including cybersquatting and phishing goes undetected, and to further the goals of Freenom,” Meta charged.
It remains unclear why Freenom has stopped allowing domain registration. In June 2015, ICANN suspended Freenom’s ability to create new domain names or initiate inbound transfers of domain names for 90 days. According to Meta, the suspension was premised on ICANN’s determination that Freenom “has engaged in a pattern and practice of trafficking in or use of domain names identical or confusingly similar to a trademark or service mark of a third party in which the Registered Name Holder has no rights or legitimate interest.”
A spokesperson for ICANN said the organization has no insight as to why Freenom might have stopped registering domain names. But it said Freenom (d/b/a OpenTLD B.V.) also received formal enforcement notices from ICANN in 2017 and 2020 for violating different obligations.
A copy of the amended complaint against Freenom, et al., is available here (PDF).
March 8, 6:11 p.m. ET: Updated story with response from ICANN. Corrected attribution of the domain abuse report.
The Biden administration today issued its vision for beefing up the nation’s collective cybersecurity posture, including calls for legislation establishing liability for software products and services that are sold with little regard for security. The White House’s new national cybersecurity strategy also envisions a more active role by cloud providers and the U.S. military in disrupting cybercriminal infrastructure, and it names China as the single biggest cyber threat to U.S. interests.
The strategy says the White House will work with Congress and the private sector to develop legislation that would prevent companies from disavowing responsibility for the security of their software products or services.
Coupled with this stick would be a carrot: An as-yet-undefined “safe harbor framework” that would lay out what these companies could do to demonstrate that they are making cybersecurity a central concern of their design and operations.
“Any such legislation should prevent manufacturers and software publishers with market power from fully disclaiming liability by contract, and establish higher standards of care for software in specific high-risk scenarios,” the strategy explains. “To begin to shape standards of care for secure software development, the Administration will drive the development of an adaptable safe harbor framework to shield from liability companies that securely develop and maintain their software products and services.”
Brian Fox, chief technology officer and founder of the software supply chain security firm Sonatype, called the software liability push a landmark moment for the industry.
“Market forces are leading to a race to the bottom in certain industries, while contract law allows software vendors of all kinds to shield themselves from liability,” Fox said. “Regulations for other industries went through a similar transformation, and we saw a positive result — there’s now an expectation of appropriate due care, and accountability for those who fail to comply. Establishing the concept of safe harbors allows the industry to mature incrementally, leveling up security best practices in order to retain a liability shield, versus calling for sweeping reform and unrealistic outcomes as previous regulatory attempts have.”
In 2012 (approximately three national cyber strategies ago), then director of the U.S. National Security Agency (NSA) Keith Alexander made headlines when he remarked that years of successful cyber espionage campaigns from Chinese state-sponsored hackers represented “the greatest transfer of wealth in history.”
The document released today says the People’s Republic of China (PRC) “now presents the broadest, most active, and most persistent threat to both government and private sector networks,” and says China is “the only country with both the intent to reshape the international order and, increasingly, the economic, diplomatic, military, and technological power to do so.”
Many of the U.S. government’s efforts to restrain China’s technology prowess involve ongoing initiatives like the CHIPS Act, a new law signed by President Biden last year that sets aside more than $50 billion to expand U.S.-based semiconductor manufacturing and research and to make the U.S. less dependent on foreign suppliers; the National Artificial Intelligence Initiative; and the National Strategy to Secure 5G.
As the maker of most consumer gizmos with a computer chip inside, China is also the source of an incredible number of low-cost Internet of Things (IoT) devices that are not only poorly secured, but are probably more accurately described as insecure by design.
The Biden administration said it would continue its previously announced plans to develop a system of labeling that could be applied to various IoT products and give consumers some idea of how secure the products may be. But it remains unclear how those labels might apply to products made by companies outside of the United States.
One could convincingly make the case that the world has witnessed yet another historic transfer of wealth and trade secrets over the past decade — in the form of ransomware and data ransom attacks by Russia-based cybercriminal syndicates, as well as Russian intelligence agency operations like the U.S. government-wide SolarWinds compromise.
On the ransomware front, the White House strategy seems to focus heavily on building the capability to disrupt the digital infrastructure used by adversaries that are threatening vital U.S. cyber interests. The document points to the 2021 takedown of the Emotet botnet — a cybercrime machine that was heavily used by multiple Russian ransomware groups — as a model for this activity, but says those disruptive operations need to happen faster and more often.
To that end, the Biden administration says it will expand the capacity of the National Cyber Investigative Joint Task Force (NCIJTF), the primary federal agency for coordinating cyber threat investigations across law enforcement agencies, the intelligence community, and the Department of Defense.
“To increase the volume and speed of these integrated disruption campaigns, the Federal Government must further develop technological and organizational platforms that enable continuous, coordinated operations,” the strategy observes. “The NCIJTF will expand its capacity to coordinate takedown and disruption campaigns with greater speed, scale, and frequency. Similarly, DoD and the Intelligence Community are committed to bringing to bear their full range of complementary authorities to disruption campaigns.”
The strategy anticipates the U.S. government working more closely with cloud and other Internet infrastructure providers to quickly identify malicious use of U.S.-based infrastructure, share reports of malicious use with the government, and make it easier for victims to report abuse of these systems.
“Given the interest of the cybersecurity community and digital infrastructure owners and operators in continuing this approach, we must sustain and expand upon this model so that collaborative disruption operations can be carried out on a continuous basis,” the strategy argues. “Threat specific collaboration should take the form of nimble, temporary cells, comprised of a small number of trusted operators, hosted and supported by a relevant hub. Using virtual collaboration platforms, members of the cell would share information bidirectionally and work rapidly to disrupt adversaries.”
But here, again, there is a carrot-and-stick approach: The administration said it is taking steps to implement Executive Order (EO) 13984 — issued by the Trump administration in January 2021 — which requires cloud providers to verify the identity of foreign persons using their services.
“All service providers must make reasonable attempts to secure the use of their infrastructure against abuse or other criminal behavior,” the strategy states. “The Administration will prioritize adoption and enforcement of a risk-based approach to cybersecurity across Infrastructure-as-a-Service providers that addresses known methods and indicators of malicious activity including through implementation of EO 13984.”
Ted Schlein, founding partner of the cybersecurity venture capital firm Ballistic Ventures, said how this gets implemented will determine whether it can be effective.
“Adversaries know the NSA, which is the elite portion of the nation’s cyber defense, cannot monitor U.S.-based infrastructure, so they just use U.S.-based cloud infrastructure to perpetrate their attacks,” Schlein said. “We have to fix this. I believe some of this section is a bit pollyannaish, as it assumes a bad actor with a desire to do a bad thing will self-identify themselves, as the major recommendation here is around KYC (‘know your customer’).”
One brief but interesting section of the strategy titled “Explore a Federal Cyber Insurance Backstop” contemplates the government’s liability and response to a too-big-to-fail scenario or “catastrophic cyber incident.”
“We will explore how the government can stabilize insurance markets against catastrophic risk to drive better cybersecurity practices and to provide market certainty when catastrophic events do occur,” the strategy reads.
When the Bush administration released the first U.S. national cybersecurity strategy 20 years ago after the 9/11 attacks, the popular term for that same scenario was a “digital Pearl Harbor,” and there was a great deal of talk then about how the cyber insurance market would soon help companies shore up their cybersecurity practices.
In the wake of countless ransomware intrusions, many companies now hold cybersecurity insurance to help cover the considerable costs of responding to such intrusions. Leaving aside the question of whether insurance coverage has helped companies improve security, what happens if every one of these companies has to make a claim at the same time?
The notion of a Digital Pearl Harbor incident struck many experts at the time as a hyperbolic justification for expanding the government’s digital surveillance capabilities, and an overstatement of the capabilities of our adversaries. But back in 2003, most of the world’s companies didn’t host their entire business in the cloud.
Today, nobody questions the capabilities, goals and outcomes of dozens of nation-state level cyber adversaries. And these days, a catastrophic cyber incident could be little more than an extended, simultaneous outage at multiple cloud providers.
The full national cybersecurity strategy is available from the White House website (PDF).
Image: Shutterstock.com
Three different cybercriminal groups claimed access to internal networks at communications giant T-Mobile in more than 100 separate incidents throughout 2022, new data suggests. In each case, the goal of the attackers was the same: Phish T-Mobile employees for access to internal company tools, and then convert that access into a cybercrime service that could be hired to divert any T-Mobile user’s text messages and phone calls to another device.
The conclusions above are based on an extensive analysis of Telegram chat logs from three distinct cybercrime groups or actors that have been identified by security researchers as particularly active in and effective at “SIM-swapping,” which involves temporarily seizing control over a target’s mobile phone number.
Countless websites and online services use SMS text messages for both password resets and multi-factor authentication. This means that stealing someone’s phone number often can let cybercriminals hijack the target’s entire digital life in short order — including access to any financial, email and social media accounts tied to that phone number.
All three SIM-swapping entities that were tracked for this story remain active in 2023, and they all conduct business in open channels on the instant messaging platform Telegram. KrebsOnSecurity is not naming those channels or groups here because they will simply migrate to more private servers if exposed publicly, and for now those servers remain a useful source of intelligence about their activities.
Each advertises their claimed access to T-Mobile systems in a similar way. At a minimum, every SIM-swapping opportunity is announced with a brief “Tmobile up!” or “Tmo up!” message to channel participants. Other information in the announcements includes the price for a single SIM-swap request, and the handle of the person who takes the payment and information about the targeted subscriber.
The information required from the customer of the SIM-swapping service includes the target’s phone number, and the serial number tied to the new SIM card that will be used to receive text messages and phone calls from the hijacked phone number.
Initially, the goal of this project was to count how many times each entity claimed access to T-Mobile throughout 2022, by cataloging the various “Tmo up!” posts from each day and working backwards from Dec. 31, 2022.
But by the time we got to claims made in the middle of May 2022, completing the rest of the year’s timeline seemed unnecessary. The tally shows that in the last seven-and-a-half months of 2022, these groups collectively made SIM-swapping claims against T-Mobile on 104 separate days — often with multiple groups claiming access on the same days.
The 104 days in the latter half of 2022 in which different known SIM-swapping groups claimed access to T-Mobile employee tools.
KrebsOnSecurity shared a large amount of data gathered for this story with T-Mobile. The company declined to confirm or deny any of these claimed intrusions. But in a written statement, T-Mobile said this type of activity affects the entire wireless industry.
“And we are constantly working to fight against it,” the statement reads. “We have continued to drive enhancements that further protect against unauthorized access, including enhancing multi-factor authentication controls, hardening environments, limiting access to data, apps or services, and more. We are also focused on gathering threat intelligence data, like what you have shared, to help further strengthen these ongoing efforts.”
While it is true that each of these cybercriminal actors periodically offer SIM-swapping services for other mobile phone providers — including AT&T, Verizon and smaller carriers — those solicitations appear far less frequently in these group chats than T-Mobile swap offers. And when those offers do materialize, they are considerably more expensive.
The prices advertised for a SIM-swap against T-Mobile customers in the latter half of 2022 ranged between USD $1,000 and $1,500, while SIM-swaps offered against AT&T and Verizon customers often cost well more than twice that amount.
To be clear, KrebsOnSecurity is not aware of specific SIM-swapping incidents tied to any of these breach claims. However, the vast majority of advertisements for SIM-swapping claims against T-Mobile tracked in this story had two things in common that set them apart from random SIM-swapping ads on Telegram.
First, they included an offer to use a mutually trusted “middleman” or escrow provider for the transaction (to protect either party from getting scammed). More importantly, the cybercriminal handles that were posting ads for SIM-swapping opportunities from these groups generally did so on a daily or near-daily basis — often teasing their upcoming swap events in the hours before posting a “Tmo up!” message announcement.
In other words, if the crooks offering these SIM-swapping services were ripping off their customers or claiming to have access that they didn’t, this would be almost immediately obvious from the responses of the more seasoned and serious cybercriminals in the same chat channel.
There are plenty of people on Telegram claiming to have SIM-swap access at major telecommunications firms, but a great many such offers are simply four-figure scams, and any pretenders on this front are soon identified and banned (if not worse).
One of the groups that reliably posted “Tmo up!” messages to announce SIM-swap availability against T-Mobile customers also reliably posted “Tmo down!” follow-up messages announcing exactly when their claimed access to T-Mobile employee tools was discovered and revoked by the mobile giant.
A review of the timestamps associated with this group’s incessant “Tmo up” and “Tmo down” posts indicates that while their claimed access to employee tools usually lasted less than an hour, in some cases that access apparently went undiscovered for several hours or even days.
How could these SIM-swapping groups be gaining access to T-Mobile’s network as frequently as they claim? Peppered throughout the daily chit-chat on their Telegram channels are solicitations for people urgently needed to serve as “callers,” or those who can be hired to social engineer employees over the phone into navigating to a phishing website and entering their employee credentials.
Allison Nixon is chief research officer for the New York City-based cybersecurity firm Unit 221B. Nixon said these SIM-swapping groups will typically call employees on their mobile devices, pretend to be someone from the company’s IT department, and then try to get the person on the other end of the line to visit a phishing website that mimics the company’s employee login page.
Nixon argues that many people in the security community tend to discount the threat from voice phishing attacks as somehow “low tech” and “low probability” threats.
“I see it as not low-tech at all, because there are a lot of moving parts to phishing these days,” Nixon said. “You have the caller who has the employee on the line, and the person operating the phish kit who needs to spin it up and down fast enough so that it doesn’t get flagged by security companies. Then they have to get the employee on that phishing site and steal their credentials.”
In addition, she said, often there will be yet another co-conspirator whose job it is to use the stolen credentials and log into employee tools. That person may also need to figure out how to make their device pass “posture checks,” a form of device authentication that some companies use to verify that each login is coming only from employer-issued phones or laptops.
For aspiring criminals with little experience in scam calling, there are plenty of sample call transcripts available on these Telegram chat channels that walk one through how to impersonate an IT technician at the targeted company — and how to respond to pushback or skepticism from the employee. Here’s a snippet from one such tutorial that appeared recently in one of the SIM-swapping channels:
“Hello this is James calling from Metro IT department, how’s your day today?”
(yea im doing good, how r u)
i’m doing great, thank you for asking
i’m calling in regards to a ticket we got last week from you guys, saying you guys were having issues with the network connectivity which also interfered with [Microsoft] Edge, not letting you sign in or disconnecting you randomly. We haven’t received any updates to this ticket ever since it was created so that’s why I’m calling in just to see if there’s still an issue or not….”
The TMO UP data referenced above, combined with comments from the SIM-swappers themselves, indicate that while many of their claimed accesses to T-Mobile tools in the middle of 2022 lasted hours on end, both the frequency and duration of these events began to steadily decrease as the year wore on.
T-Mobile declined to discuss what it may have done to combat these apparent intrusions last year. However, one of the groups began to complain loudly in late October 2022 that T-Mobile must have been doing something that was causing their phished access to employee tools to die very soon after they obtained it.
One group even remarked that they suspected T-Mobile’s security team had begun monitoring their chats.
Indeed, the timestamps associated with one group’s TMO UP/TMO DOWN notices show that their claimed access was often limited to less than 15 minutes throughout November and December of 2022.
Whatever the reason, the calendar graphic above clearly shows that the frequency of claimed access to T-Mobile decreased significantly across all three SIM-swapping groups in the waning weeks of 2022.
T-Mobile US reported revenues of nearly $80 billion last year. It currently employs more than 71,000 people in the United States, any one of whom can be a target for these phishers.
T-Mobile declined to answer questions about what it may be doing to beef up employee authentication. But Nicholas Weaver, a researcher and lecturer at University of California, Berkeley’s International Computer Science Institute, said T-Mobile and all the major wireless providers should be requiring employees to use physical security keys for that second factor when logging into company resources.
A U2F security key made by Yubico.
“These breaches should not happen,” Weaver said. “Because T-Mobile should have long ago issued all employees security keys and switched to security keys for the second factor. And because security keys provably block this style of attack.”
The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB key and pressing a button on the device. The key works without the need for any special software drivers.
The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
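The origin binding described above can be illustrated with a short, hypothetical Python simulation. Real U2F/WebAuthn authenticators sign with asymmetric keys over a browser-supplied origin; here an HMAC stands in for the device signature, and the domain names are made up, but the logic is the same: the browser — not the user — reports the page origin inside the signed client data, so an assertion produced on a look-alike phishing page fails verification at the legitimate server.

```python
import hashlib
import hmac
import json

# Stand-in for the authenticator's private key (real devices use
# per-site asymmetric key pairs, not a shared HMAC secret).
KEY = b"device-secret"

def authenticator_sign(origin: str, challenge: str) -> tuple[bytes, bytes]:
    # The browser builds the client data from the page's *actual* origin;
    # a phishing site cannot lie about this field.
    client_data = json.dumps({"origin": origin, "challenge": challenge}).encode()
    sig = hmac.new(KEY, hashlib.sha256(client_data).digest(), hashlib.sha256).digest()
    return client_data, sig

def server_verify(client_data: bytes, sig: bytes, challenge: str) -> bool:
    data = json.loads(client_data)
    # Reject any assertion whose signed origin is not the legitimate site.
    if data["origin"] != "https://login.example-employer.com":
        return False
    if data["challenge"] != challenge:
        return False
    expected = hmac.new(KEY, hashlib.sha256(client_data).digest(), hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected)

# A login from the real site verifies; the same flow initiated from a
# look-alike phishing domain is rejected because the signed origin differs.
ok = server_verify(
    *authenticator_sign("https://login.example-employer.com", "abc123"), "abc123"
)
phished = server_verify(
    *authenticator_sign("https://login.example-employer.com.evil.example", "abc123"),
    "abc123",
)
```

Because the origin check happens inside the cryptographic exchange rather than in the employee’s head, there is nothing for a voice phisher to talk the victim out of.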
Nixon said one confounding aspect of SIM-swapping is that these criminal groups tend to recruit teenagers to do their dirty work.
“A huge reason this problem has been allowed to spiral out of control is because children play such a prominent role in this form of breach,” Nixon said.
Nixon said SIM-swapping groups often advertise low-level jobs on places like Roblox and Minecraft, online games that are extremely popular with young adolescent males.
“Statistically speaking, that kind of recruiting is going to produce a lot of people who are underage,” she said. “They recruit children because they’re naive, you can get more out of them, and they have legal protections that other people over 18 don’t have.”
For example, she said, even when underage SIM-swappers are arrested, the offenders tend to go right back to committing the same crimes as soon as they’re released.
In January 2023, T-Mobile disclosed that a “bad actor” stole records on roughly 37 million current customers, including their name, billing address, email, phone number, date of birth, and T-Mobile account number.
In August 2021, T-Mobile acknowledged that hackers made off with the names, dates of birth, Social Security numbers and driver’s license/ID information on more than 40 million current, former or prospective customers who applied for credit with the company. That breach came to light after a hacker began selling the records on a cybercrime forum.
In the shadow of such mega-breaches, any damage from the continuous attacks by these SIM-swapping groups can seem insignificant by comparison. But Nixon says it’s a mistake to dismiss SIM-swapping as a low volume problem.
“Logistically, you may only be able to get a few dozen or a hundred SIM-swaps in a day, but you can pick any customer you want across their entire customer base,” she said. “Just because a targeted account takeover is low volume doesn’t mean it’s low risk. These guys have crews that go and identify people who are high net worth individuals and who have a lot to lose.”
Nixon said another aspect of SIM-swapping that causes cybersecurity defenders to dismiss the threat from these groups is the perception that they are full of low-skilled “script kiddies,” a derisive term used to describe novice hackers who rely mainly on point-and-click hacking tools.
“They underestimate these actors and say this person isn’t technically sophisticated,” she said. “But if you’re rolling around in millions worth of stolen crypto currency, you can buy that sophistication. I know for a fact some of these compromises were at the hands of these ‘script kiddies,’ but they’re not ripping off other people’s scripts so much as hiring people to make scripts for them. And they don’t care what gets the job done, as long as they get to steal the money.”
Web hosting giant GoDaddy made headlines this month when it disclosed that a multi-year breach allowed intruders to steal company source code, siphon customer and employee login credentials, and foist malware on customer websites. Media coverage understandably focused on GoDaddy’s admission that it suffered three different cyberattacks over as many years at the hands of the same hacking group. But it’s worth revisiting how this group typically got in to targeted companies: By calling employees and tricking them into navigating to a phishing website.
In a filing with the U.S. Securities and Exchange Commission (SEC), GoDaddy said it determined that the same “sophisticated threat actor group” was responsible for three separate intrusions, including:
-March 2020: A spear-phishing attack on a GoDaddy employee compromised the hosting login credentials of approximately 28,000 GoDaddy customers, as well as login credentials for a small number of employees;
-November 2021: A compromised GoDaddy password let attackers steal source code and information tied to 1.2 million customers, including website administrator passwords, sFTP credentials, and private SSL keys;
-December 2022: Hackers gained access to and installed malware on GoDaddy’s cPanel hosting servers that “intermittently redirected random customer websites to malicious sites.”
“Based on our investigation, we believe these incidents are part of a multi-year campaign by a sophisticated threat actor group that, among other things, installed malware on our systems and obtained pieces of code related to some services within GoDaddy,” the company stated in its SEC filing.
What else do we know about the cause of these incidents? We don’t know much about the source of the November 2021 incident, other than GoDaddy’s statement that it involved a compromised password, and that it took about two months for the company to detect the intrusion. GoDaddy has not disclosed the source of the breach in December 2022 that led to malware on some customer websites.
But we do know the March 2020 attack was precipitated by a spear-phishing attack against a GoDaddy employee. GoDaddy described the incident at the time in general terms as a social engineering attack, but one of its customers affected by that March 2020 breach actually spoke to one of the hackers involved.
The hackers were able to change the Domain Name System (DNS) records for the transaction brokering site escrow.com so that it pointed to an address in Malaysia that was host to just a few other domains, including the then brand-new phishing domain servicenow-godaddy[.]com.
The general manager of Escrow.com found himself on the phone with one of the GoDaddy hackers, after someone who claimed they worked at GoDaddy called and said they needed him to authorize some changes to the account.
In reality, the caller had just tricked a GoDaddy employee into giving away their credentials, and he could see from the employee’s account that Escrow.com required a specific security procedure to complete a domain transfer.
The general manager of Escrow.com said he suspected the call was a scam, but decided to play along for about an hour — all the while recording the call and coaxing information out of the scammer.
“This guy had access to the notes, and knew the number to call,” to make changes to the account, he told KrebsOnSecurity. “He was literally reading off the tickets to the notes of the admin panel inside GoDaddy.”
About halfway through this conversation — after being called out by the general manager as an imposter — the hacker admitted that he was not a GoDaddy employee, and that he was in fact part of a group that enjoyed repeated success with social engineering employees at targeted companies over the phone.
Absent from GoDaddy’s SEC statement is another spate of attacks in November 2020, in which unknown intruders redirected email and web traffic for multiple cryptocurrency services that used GoDaddy in some capacity.
It is possible this incident was not mentioned because it was the work of yet another group of intruders. But in response to questions from KrebsOnSecurity at the time, GoDaddy said that incident also stemmed from a “limited” number of GoDaddy employees falling for a sophisticated social engineering scam.
“As threat actors become increasingly sophisticated and aggressive in their attacks, we are constantly educating employees about new tactics that might be used against them and adopting new security measures to prevent future attacks,” GoDaddy said in a written statement back in 2020.
Voice phishing or “vishing” attacks typically target employees who work remotely. The phishers will usually claim that they’re calling from the employer’s IT department, supposedly to help troubleshoot some issue. The goal is to convince the target to enter their credentials at a website set up by the attackers that mimics the organization’s corporate email or VPN portal.
Experts interviewed for an August 2020 story on a steep rise in successful voice phishing attacks said there are generally at least two people involved in each vishing scam: One who is social engineering the target over the phone, and another co-conspirator who takes any credentials entered at the phishing page — including multi-factor authentication codes shared by the victim — and quickly uses them to log in to the company’s website.
The attackers are usually careful to do nothing with the phishing domain until they are ready to initiate a vishing call to a potential victim. And when the attack or call is complete, they disable the website tied to the domain.
This is key because many domain registrars will only respond to external requests to take down a phishing website if the site is live at the time of the abuse complaint. This tactic also can stymie efforts by companies that focus on identifying newly-registered phishing domains before they can be used for fraud.
A U2F device made by Yubico.
GoDaddy’s latest SEC filing indicates the company had nearly 7,000 employees as of December 2022. In addition, GoDaddy contracts with another 3,000 people who work full-time for the company via business process outsourcing companies based primarily in India, the Philippines and Colombia.
Many companies now require employees to supply a one-time password — such as one sent via SMS or produced by a mobile authenticator app — in addition to their username and password when logging in to company assets online. But both SMS and app-based codes can be undermined by phishing attacks that simply request this information in addition to the user’s password.
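To see why app-generated codes can be relayed by a phisher, it helps to look at how they are produced. The sketch below is a minimal implementation of time-based one-time passwords in the style of RFC 6238 (the scheme behind most authenticator apps); the secret shown is a common illustrative test value, not a real credential.

```python
# Minimal TOTP sketch in the style of RFC 6238 (illustrative only).
import base64
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6, now=None):
    """Derive the current one-time code from a shared base32 secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The "moving factor" is just the current 30-second window number.
    counter = int((now if now is not None else time.time()) // interval)
    digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes, mask the sign bit.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The code depends only on the shared secret and the clock, so it is
# valid for *anyone* who submits it within the current window -- which
# is exactly why a phishing page can relay it to the real site in
# real time.
print(totp("JBSWY3DPEHPK3PXP"))
```

Nothing in the code ties it to the website the user typed it into; that missing binding is what the security keys described next provide.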
One multifactor option — physical security keys — appears to be immune to these advanced scams. The most commonly used security keys are inexpensive USB-based devices. A security key implements a form of multi-factor authentication known as Universal 2nd Factor (U2F), which allows the user to complete the login process simply by inserting the USB device and pressing a button on the device. The key works without the need for any special software drivers.
The allure of U2F devices for multi-factor authentication is that even if an employee who has enrolled a security key for authentication tries to log in at an impostor site, the company’s systems simply refuse to request the security key if the user isn’t on their employer’s legitimate website, and the login attempt fails. Thus, the second factor cannot be phished, either over the phone or Internet.
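The origin binding described above can be sketched in a few lines. In U2F's successor WebAuthn, the browser embeds the page's origin in the client data that the security key signs, and the server rejects any assertion naming a different origin. The function and origin names below are illustrative assumptions, not a real WebAuthn library's API; a production server would also verify the signature itself.

```python
# Hedged sketch of origin binding: the server refuses any login assertion
# whose signed client data names an origin other than its own.
import json

# Assumed relying-party origin for this example.
EXPECTED_ORIGIN = "https://sso.example-employer.com"

def verify_assertion_origin(client_data_json):
    """Return True only if the browser-supplied client data names our origin.

    The browser fills in the origin field itself; a phishing page cannot
    override it, and the authenticator signs over this data.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# A phishing proxy can relay passwords and one-time codes verbatim, but the
# origin baked into the signed client data gives it away:
legit = json.dumps({"type": "webauthn.get",
                    "origin": EXPECTED_ORIGIN}).encode()
phish = json.dumps({"type": "webauthn.get",
                    "origin": "https://login.attacker.example"}).encode()
assert verify_assertion_origin(legit) is True
assert verify_assertion_origin(phish) is False
```

Because the check happens on data the user never types, there is no code for the vishing caller to ask for.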
In July 2018, Google disclosed that it had not had any of its 85,000+ employees successfully phished on their work-related accounts since early 2017, when it began requiring all employees to use physical security keys in place of one-time codes.