Here you will find important news and warnings about security and privacy on the internet!
Here I present the latest news, warnings, and advice about security and privacy on the internet. Only you yourselves can look after your own security and privacy, and that requires knowledge, strategy, and constant vigilance.
At the moment we only have news feeds in English. If you know of any Polish or German news sources on this topic, send me their web addresses and I will try to add them to this page.
(On the PRIVACY POLICY page you will find my recommendations for a broad strategy for protecting your computer against hackers.)
DISCLAIMER:
It’s a concern for families everywhere: keeping kids safe online. For parents with teenagers, there’s the added complication of trying to balance a child’s safety with their right to privacy. But is online safety just families’ problem?
Policy advocate Stephen Balkam says everyone – including government, technology companies, law enforcement, and individuals – has a role to play. He thinks about these issues a lot as the founder and CEO of the Family Online Safety Institute (FOSI), a nonprofit that brings together government, industry, academia, and nonprofits to innovate around public policy, industry best practices, and digital parenting.
He chatted with 1Password’s Michael “Roo” Fey on the Random but Memorable podcast about how parents should approach online safety with their kids. Balkam also discussed the emerging threats to children’s online safety, parental rights and children’s rights, and how kids can always find a workaround to get online.
Want to learn more? Read the interview highlights below or listen to the full podcast episode.
Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.
Michael Fey: What is the Family Online Safety Institute’s mission?
Stephen Balkam: To make the online world safer for kids and their families. We don’t say the word “safe” because there’s no such thing as 100% safe, but we can definitely make it safer. We do it through what we call the three Ps: Policy, Practices, and Parenting.
Enlightened public policy is what we try to persuade our friends of on Capitol Hill, in the state capitals, in London, and in Brussels: public policy, laws, and regulations that are grounded in research, not in banner headlines from the Daily Mail or something like that. We work with policymakers on both sides of the aisle. We’re nonpartisan and do our best to encourage the emergence of good legislation.
“We’re nonpartisan and do our best to encourage the emergence of good legislation.”
We also talk to the regulators. We sit down with the FTC a great deal, Ofcom in the UK, and eSafety Commissioner Julie Inman Grant in Australia. These are the folks who actually have to enforce the laws as they are created. We work with them and provide a conduit to the technology companies, and vice versa, so there’s better understanding of the work they’re doing.
The second P refers to industry best practices. We work with our members to up their “trust and safety game”, if you will, and act under NDA as constructive critics of their products and services. To that end, we’ve worked with a number of the brand name companies to try to get them to put more resources behind the safety of their products.
The third P is an initiative we call Good Digital Parenting. We take everything we’ve learned from the laws and regulations, add that to the products and services that the tech companies are providing, including filtering tools, security devices, and so on, and translate that into easy-to-use language for parents.
“We have something called ‘The Seven Steps to Good Digital Parenting.’ You can put that on your fridge.”
We have something called The Seven Steps to Good Digital Parenting. You can put that on your fridge to remind you to keep talking with your kids, to set ground rules, and to be a good digital role model yourself.
MF: How has your work evolved over the years? And what do you see as the most pressing challenges and emerging threats to children’s online safety today?
SB: When we started we only had two Ps: the policy side and the industry best practices side. Within a few years, we could see there was a real need to help parents. We call it empowering parents to confidently navigate the web with their kids.
All of the issues that people have become familiar with – cyberbullying, sexting, overuse, oversharing, and screen time – these have been really vexing questions over the last decade or so.
I would say over the last year or two, the emergence of generative AI through ChatGPT and other products has just exploded onto the scene and caused a new wave of issues, concerns, fears, and excitement. It’s why we decided to do a year-long research project on it last year.
MF: Tell me more about that. What have the findings been? What did you set out to discover? What was the focus of the project?
SB: We looked at parents and teens in the U.S., Germany, and Japan to find out about their experience of generative AI. That includes their concerns, their biggest fears, their biggest hopes, and just generally their attitudes toward it.
Surprisingly, it was the first time where the kids admitted that their parents knew more about generative AI than they did. Every time we’ve looked at anything from social media, the use of Snapchat, Instagram, and in the early days, Facebook, teens were far ahead of their parents in terms of usage and knowledge.
“The kids admitted that their parents knew more about generative AI than they did.”
But with GenAI, we found something really interesting. I think it’s because a lot of parents were already using ChatGPT and similar products for their work. And not surprisingly, they were quite concerned about generative AI taking over their jobs, so they really got in deep.
In terms of what parents were concerned about for their own kids, it was that they wouldn’t develop critical thinking skills in the way that they had to, going through school and college and into the workforce. They were concerned their kids would just have their essays written for them by AI.
When we asked teens about their biggest concerns, ironically, given that they’re not in the workforce yet, their biggest concern was whether there will be jobs for them when they do get into the workforce.
“The biggest concern [for teenagers] was whether there will be jobs for them when they get into the workforce.”
Also, the use of generative AI tools to create images and videos to cyberbully – that wasn’t a concern for parents, but it was definitely one for teens. That’s a huge concern if you’re still at school.
MF: FOSI aims to create a culture of responsibility in the online world. What role do you see individuals, tech companies, and policymakers playing in fostering that safer digital environment for children?
SB: If you can envision a large circle, at the top of the circle would be government. Government definitely has a role to play in setting the rules for what is allowed and not allowed online.
It’s a complicated role, particularly in the United States, where we have the First Amendment. We have this tricky balance between rights of privacy and safety. It’s not easy legislating in this space but the government has a role to play in providing a legal framework and to urge folks to do more and better in this space.
“The government has a role to play in providing a legal framework and to urge folks to do more and better in this space.”
Law enforcement is also part of this picture and part of the circle. For the really heinous stuff, we need well-resourced law enforcement to go after the bad actors. In many cases, law enforcement does not have the resources it needs, but even so, it’s part of the picture.
It’s also not acceptable for industry just to put out tools and products and services without thinking about online safety. They definitely have a role to play. When I go and talk to VCs, I say: “It’s great you have a gifted CEO and a fabulously skilled CTO, but who’s your chief online safety officer? Let’s make sure you bake that in.” Safety by design, if you will.
Parents, teachers, even the kids themselves, have a responsibility for maintaining safety online. We encourage parents to use parental controls. When kids hit high school, the emphasis shifts to being more of a co-pilot with your teen and working with them so that they utilize the online safety tools that have been created for them – to report, block, be private, and in many ways, shape or administer their online lives.
“When kids hit high school, the emphasis shifts to being more of a co-pilot with your teenager and working with them so they utilize the online safety tools that have been created for them.”
And then teachers, of course, have a huge role to play in terms of giving online safety advice or lessons and modeling how to be not just safe, but civil online as well.
MF: There seems to be a real interplay between parental rights and children’s rights at the moment. Can you talk about that?
SB: I should have said right at the front that FOSI is an international non-profit. What I often notice in Europe is there’s a far greater emphasis on children’s rights and teens’ rights to access content, gather online, and express themselves. And also a right to be safe when they’re online. Here in the U.S., we tend to emphasize parental rights, and that often has pretty heavy connotations with it, particularly in certain states.
Parents, particularly those who have younger children, absolutely have the right and the responsibility to keep their young kids safe online and use parental controls. But things shift in the teen years. Kids, at some point or another, start to have rights themselves, including rights of privacy and a right not to be surveilled by their parents while they’re online.
“Kids, at some point or another, start to have rights themselves, including rights of privacy and a right not to be surveilled by their parents while they’re online.”
Are we saying that kids, until they’re 18, have zero rights? And then, once they hit 18, inherit 100% rights? Or is there a gradual curve upwards? Not surprisingly, our organization argues that kids have rights as they age, and it’s a gradual curve.
It’s not an easy thing. It’s not something you can point to and say: “Absolutely this is the point at which they have X, Y, and Z rights.” But it is a commonsensical thing and also a realization that 15-, 16-, 17-year-olds will have the ability to circumvent whatever you try and put in their way.
MF: How does FOSI educate parents about online safety? What are the key principles or tips you have for parents?
SB: We developed the seven steps to condense all of our various messaging and advice. It boils down to: Talk to your kids. That talk should be done early and often.
When I say early, I mean as young as kindergartners. They can understand the word “bad”, they can understand the word “danger”, they can understand concepts like: “We’re not going to let you have this whenever you want it. There will be times when you can have it and times when you can’t. We’ll also set up some rules where there will be consequences if you misbehave.”
“Talk to your kids. That talk should be done early and often.”
Laying all that out early is absolutely critical so the kid knows that when you act, you’re not doing it unfairly. It’s based on stuff you’ve already talked about. But it’s an ongoing conversation. You’re going to have to do it almost on a yearly basis.
Back to school is the time that we often suggest as a good time. “Look, you’re now going into third grade. We’re getting you this gizmo watch so that you can contact us and we can contact you, but no, you’re not getting a phone.”
Also, milestones, like: “You’re turning 13, you’re now legally able to go on to various social media sites, but maybe we’re not going to. We want to discuss each one in turn.”
And at 14 or 15, sitting down with them before they go back to school: “Now show me how you report something on Snap. Tell me how you’re remaining private on Instagram.” This co-pilot concept is about working with your kid to make sure they’re utilizing the tools that are there for them rather than you trying to lock everything down. So that’s number one. Talk with your kids.
Number three is use parental controls. We talked about that before.
Number seven, probably the most important, is to be a good digital role model yourself. The top complaint I get from kids when I work in schools is: “I can’t get my parents' attention. My mom is always on Facebook. My dad is always checking his email.” Put your own screens down and give your kids face time.
“The top complaint I get from kids when I work in schools is: ‘I can’t get my parents’ attention. My mom is always on Facebook. My dad is always checking his email.’”
We talk about tech-free zones in the house. A tech-free zone includes the bedroom. We’re not fans of screens in kids' bedrooms. No screens at the table if you sit at the table for a meal. Tech-free time zones, so maybe you have a 9:00PM or 10:00PM curfew where everyone puts their devices in a closet to charge up overnight.
We say to parents at PTA meetings: “Raise your hands if you use your phone as an alarm clock.” And almost everyone’s hands go up. The next thing I say is: “Don’t. Don’t use your phone as an alarm clock.”
“Little kids love to jump in your bed in the morning. They’ll see that blue haze on your face and they’re going to want the same thing.”
Because it’s the last thing you’re going to look at when you’re going to bed. It’s also the first thing you’re going to look at, and sometimes even before you’re brushing your teeth, you’ll be checking your email and your texts and the weather. And if you have little kids, they love to jump in your bed in the morning. They’ll see that blue haze on your face and they’re going to want the same thing. Kids will do what you do rather than what you tell them to do.
MF: Do you think that teenagers are often neglected in the conversation around online security and almost seen as something to be managed instead of someone to be included?
SB: Oh, for sure. That’s why whenever we can, we include teenagers in our surveys, in our research. It’s extremely important to hear from them because it’s their lived experience that will inform public policy, as well as the products and services that tech companies build.
MF: Where can people go to find out more about you, the Family Online Safety Institute, and the incredible work that you’re doing?
SB: Our website is fosi.org. We’re also on LinkedIn, X, Instagram, all the usual places. And we have a YouTube channel. You’ll find a number of quite amusing videos with actual parents and kids illustrating the seven steps.
Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
Subscribe to our podcast
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) recently announced that they are investigating a major breach at Sisense, a business intelligence company.
As a result of the breach, it is critical that Sisense customers take action immediately to minimize the impact of any breached credentials. Here is a quick overview of what happened, and a look at what needs to be done to secure your developer secrets to protect against follow-on data breaches.
According to reporting by Brian Krebs, attackers gained access to Sisense’s self-hosted GitLab environment. From there, they found an unprotected token that gave them full access to the company’s Amazon S3 Buckets. Once they had full access to the company’s cloud environment, they were able to copy and exfiltrate several terabytes of customer data, including millions of access tokens, passwords, and even SSL certificates.
Exact details have not been published; however, it appears that over 1,000 companies (and possibly over 2,000) may have been impacted, ranging from startups to global brands. The company serves businesses in the finance, healthcare, retail, media & entertainment, software and technology, and transportation industries.
While the initial breach is severe on its own, it’s the potential for downstream attacks on companies and consumers that likely has CISA concerned. The stolen credentials could give the attackers access to additional cloud environments containing consumer information as they move downstream from their initial target to Sisense’s customers. Many of these credentials – SSL certificates, SSH keys, and API tokens – exist for an extended period of time by default. As a result, it is imperative that Sisense customers take action to secure their developer credentials.
Sisense has shared guidance with their customers about the types of credentials to rotate, including but not limited to account passwords, single sign-on (SSO) client secrets, database credentials, Git credentials, API tokens, and SSL certificates. Impacted customers should rotate each of these credential types without delay.
Even if you were not directly impacted by the Sisense breach, it’s important to review your security posture, especially when it comes to developer secrets and devops environments. As we’ve written about in the past, businesses of all sizes struggle to protect developer secrets. Even sophisticated security and engineering organizations can fall victim to secrets leaks.
Here are some steps you can take to secure developer credentials:
Despite the privileged access developer secrets provide, they often do not have the same degree of protection as passwords, especially since IT and security teams can lack visibility into the health of these credentials. These types of developer secrets should be secured with end-to-end encrypted storage, like an enterprise password manager (EPM).
It’s too easy to accidentally commit a secret, even if it’s added to an environment configuration file. The best defense is to use secrets references that can be replaced programmatically at run time.
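As an illustration, a runtime resolver of this kind can be sketched in a few lines. The `op://vault/item/field` reference syntax below is modeled loosely on 1Password’s secret references, but `resolve_references` and `fake_lookup` are hypothetical stand-ins for illustration, not a real API:

```python
import re

# Pattern for op://vault/item/field style references (illustrative syntax).
REF_PATTERN = re.compile(r"op://(?P<vault>[\w-]+)/(?P<item>[\w-]+)/(?P<field>[\w-]+)")

def resolve_references(config: str, lookup) -> str:
    """Replace each secret reference in `config` with the value returned
    by `lookup(vault, item, field)` - e.g. a call out to a secrets manager."""
    def substitute(match: re.Match) -> str:
        return lookup(match["vault"], match["item"], match["field"])
    return REF_PATTERN.sub(substitute, config)

# The committed file contains only references, never real secret values.
template = "DB_PASSWORD=op://prod/postgres/password"

def fake_lookup(vault, item, field):
    # Hypothetical in-memory store standing in for a secrets manager.
    return {("prod", "postgres", "password"): "s3cret"}[(vault, item, field)]

print(resolve_references(template, fake_lookup))
```

Because the real value only exists in memory at run time, nothing sensitive ever lands in the repository or the environment configuration file.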
Although it is more effective to address the root causes of developer secrets leakage, businesses and organizations should inspect Git commits as a last safety check to make sure credentials are not accidentally committed to shared repositories. GitHub recently announced that they have turned on push protection for all public repositories, but this feature needs to be applied to all repositories, public or private, cloud or self-hosted.
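As a rough sketch of what such a check does, here is a toy pre-commit scanner. The handful of regexes is a tiny illustrative subset; real tools like GitHub push protection match hundreds of provider-specific token formats:

```python
import re

# A few well-known token shapes (illustrative subset only).
SECRET_PATTERNS = [
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                       # GitHub personal access token
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key ID
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),  # private key material
]

def find_secrets(diff_text: str):
    """Return all suspicious matches found in a staged diff."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(diff_text))
    return hits

staged_diff = '+ AWS_KEY = "AKIAABCDEFGHIJKLMNOP"\n+ name = "example"'
matches = find_secrets(staged_diff)
if matches:
    print(f"Refusing commit: possible secrets found: {matches}")
```

Wired into a pre-commit hook or CI step, a scanner like this acts as the last safety check described above: it can block the commit before the credential ever reaches a shared repository.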
Amazon S3 Block Public Access can help you make sure that your Amazon S3 buckets don’t allow public access. As of April 2023, block public access is turned on by default for all new Amazon S3 buckets. For any created prior to April 2023, the setting should be configured for your AWS accounts or within individual Amazon S3 buckets. Another preventative security measure for Amazon S3 buckets is to use IAM Access Analyzer to regularly monitor which buckets (and other resources) are accessible outside your account or AWS environment.
While organizations must react to this breach, the most effective solution to this type of breach is to implement the practices outlined in this post to secure developer secrets. To that end, 1Password provides an enterprise password manager (EPM) that secures developer secrets while simplifying the complexity of developer workflows.
1Password’s offerings provide critical secrets management functionality to prevent breaches caused by developer credentials, and are available in all 1Password plans:
Store SSH keys, API tokens, database credentials, and more in 1Password’s end-to-end encrypted vaults. Use 1Password to generate, store, and biometrically authenticate SSH connections so SSH private keys are never saved as plaintext on your local disk.
Use the 1Password VSCode Extension to find secrets in your code as you work, save them to 1Password with one click, and then replace them with a secrets reference.
Integrate 1Password with your CI/CD pipelines (GitHub Actions, CircleCI, and Jenkins) and infrastructure as code (IaC) tools (Kubernetes, Terraform, Pulumi, Ansible) to programmatically replace secrets at runtime.
While it’s not possible to prevent 100% of breaches, it is possible to empower software engineering teams and other employees with the tools they need to keep secrets safe.
You can get started with a 14-day free business trial, or by visiting our developer docs to learn more about how you can secure developer secrets.
Streamline how developers manage SSH keys, API tokens, and other infrastructure secrets throughout the entire software development life cycle with 1Password Business.
Try free for 14 days
This is the final post in a series about shadow IT. In this series, we’ve detailed how and why teams use unapproved apps and devices, and cybersecurity approaches for securely managing it. For a complete overview of the topics discussed in this series, download Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
We all use passwords and other secrets to access things at work. It’s the IT team’s responsibility to secure those secrets. For most departments, secrets management needs are simple: They sign in to apps and websites with passwords, or passkeys, or sometimes with multi-factor authentication.
But developers have unique workflows and secrets management needs.
The types of secrets developers manage every day include SSH keys, database and API keys, server credentials, and other encryption keys. These keys power authentication methods developers use every day to access systems, integrate applications, securely transfer files, and more. To complicate matters, developer secrets often live outside IT’s purview.
That means developers are often left to manage secrets themselves, but that scenario can create serious risks for companies. A 2023 GitGuardian study revealed that in just one popular open-source repository used by developers, nearly 4,000 unique secrets were exposed across all projects. Of those unique secrets, they found 768 were still in active use. Separately, in the first two months of 2024, GitHub reported it found more than one million leaked secrets on public repositories, which translates to a rate of about 12 secrets leaked per minute during that time. That’s a lot of leaks!
Secrets management, in other words, is a growing problem. To make matters worse, the typical shadow IT concerns that plague non-developer teams apply to developers, too. That is, the passwords and credentials they use to sign in to apps and websites may not be secure – and IT may not even know about it.
The challenge, should IT and security teams choose to accept it: Secure encryption keys and other developer secrets no matter which apps and tools are being used – and do it without adding friction to already complex workflows.
Due to the nature of their roles, developers building software products have direct access to key systems and sensitive data. In addition, they need to work with secrets directly in their terminal, code editor, and deployment pipelines. Engineering teams may also need to share secrets for different applications, or when configuring their development environments.
To streamline this process, developers sometimes store secrets somewhere convenient in plaintext, or hardcode them into the source code while working. Either of these scenarios – not exactly secrets management best practices – can lead to data breaches or compromised systems.
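A minimal sketch of the safer alternative: read the credential from the environment at startup instead of baking it into source. The variable name and helper below are illustrative:

```python
import os

# Anti-pattern: a credential hardcoded in source ends up in version control.
# DB_PASSWORD = "hunter2"

def get_db_password() -> str:
    """Read the credential from the environment, failing loudly if it's
    absent, so a missing secret is caught at startup rather than mid-request."""
    password = os.environ.get("DB_PASSWORD")
    if password is None:
        raise RuntimeError("DB_PASSWORD is not set; refusing to start")
    return password

os.environ["DB_PASSWORD"] = "example-only"  # in practice, injected by the runtime
print(get_db_password())
```

This keeps the secret out of the codebase, though the environment itself still needs a secure source, which is where secrets management tooling comes in.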
The growing number of different tools and cloud environments developers use to do their work has made secrets management more difficult. A 1Password report revealed that 50% of individual contributors in IT or DevOps roles admit they’re storing secrets in more locations than they can count. 25% of companies said their secrets are stored in 10 or more locations.
And while the IT team has traditionally been responsible for managing passwords, IT teams often lack visibility and control over developer secrets like SSH keys and API tokens. This seems to be the norm: approximately 80% of companies surveyed by 1Password said they didn’t manage their secrets well, and 60% have experienced secret leaks. In fact, 75% of developers admitted they had access to sensitive information like a former employer’s infrastructure secrets(!).
Why is it so hard for developers to secure secrets like SSH keys and database credentials? Security and productivity are often in tension. One survey found that 73% of developers agree that the work or tools their security team typically requires them to use interfere with their productivity and innovation.
Each cloud provider, application, server, database, or other tool a developer uses typically requires separate authentication – and might require learning specialized tooling for that environment. Authenticating for multiple tools can interrupt workflows, slowing developers down – which can be unacceptable for teams trying to deliver projects on tight deadlines.
As a workaround, sometimes developers store credentials insecurely or take shortcuts to enable faster access. Lacking a secure, productivity-friendly alternative, this is how you end up with hard-coded credentials.
In addition to taking shortcuts, lack of education around proper secrets management has allowed insecure habits to form, including:
When developers share secrets over unencrypted email or messaging apps, manually set up system configurations on their local device to run a program, or manually copy sensitive values to connect to another machine, those secrets are not secure.
As we detailed in the last post in this series, a first step to wrangling shadow IT across all of your company’s departments is understanding employees’ responsibilities and workflows. This helps IT and security teams identify not only where employees may be using shadow IT to help them in their jobs, but why they’re using it.
Employees often use shadow IT to improve their productivity – to work around something that’s holding them back from doing their best work, on schedule. This is especially important to understand for the engineering team.
The question is how to secure developer workflows while simultaneously streamlining them. Secure credential management for developers can be trickier than it is among non-developers, because the workflows are more technical, so the fix requires a more bespoke solution. Implementing single sign-on (SSO) as part of an identity and access management (IAM) framework can go a long way to securing non-development workflows, but they don’t typically address developer needs.
The good news is there is arguably more opportunity within developer workflows to secure credentials and reduce friction than with other teams. It’s not particularly convenient for developers to generate SSH keys manually every time, or to store SSH keys on their local drive, or to store plaintext secrets in code. These (insecure) methods are just the way things have always been done – but they’re certainly not without friction.
However, it can be difficult for IT and security admins to know where to start, because they’re less familiar with developer workflows. That being the case, a good first step is to try to understand developers’ unique secrets management use cases. For example, it may be helpful to understand that each developer starts their day with a ‘git pull’, or why they have to google the ssh-keygen command every time they need it (because it’s so complicated).
To find points of friction, pinpoint where developers may be taking shortcuts with secrets management, and where shadow IT may be lurking, it can help to ask questions like:
Once you gather this information, then what? It’s not realistic to try and monitor all the ways developers may be sharing secrets or prevent employees from using shadow IT (you’ll be engaging in an unwinnable game of whac-a-mole). The only practical way forward is to put effective secrets management tools in place so developers can use the platforms they want, but in a secure way.
How do you do that? For starters, look for tools that use automation to eliminate the possibility of human error. That should make it easier to get buy-in too: Developers will never object to removing friction from their workflows, especially when you can automate tedious tasks in the software development lifecycle and lessen their workload.
For developers using SSH keys, for example, you can implement an enterprise password manager (EPM) like 1Password that supports secure secrets management for credentials like SSH keys in a way that fits seamlessly into developer workflows. In addition, an EPM with secrets management features can help developers securely work with API tokens, application keys, and other credentials where they need them – in their terminal and code editor. That means both stronger security and increased productivity.
To learn more about shadow IT and how IT teams can adapt to evolving workplace challenges in a hybrid environment, catch up with three previous posts in this series:
For a complete overview of the topics discussed in this series, download the eBook, Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
Learn why teams like finance, marketing, HR, and engineering use shadow IT, the security vulnerabilities that can follow, and how to manage it all.
Download now
There’s one question our Security team hears more than any other: Is my 1Password data vulnerable if my device is compromised or infected with malware?
A compromised device involves full control or visibility at the system level, and password managers like 1Password store data that’s accessible to the system – that’s how they function. In fact, that’s how most typical apps are built.
The short answer is: Yes, your secrets are vulnerable to an attacker who’s fully compromised your device, however unlikely that situation may be. And let me be clear that if you’re an everyday internet citizen who browses securely and maintains their devices, worrying about such local threats is probably unnecessary. The longer answer is nuanced, as they so often are, and presents an interesting paradox.
So, let’s explore that paradox, then dig right into local threat protections in 1Password. After our deep dive, I’ll reveal the crucial non-security consideration involved in our threat-mitigation approach, and explain how the 1Password team strikes an incredibly delicate balance.
Keeping information safe on your devices is essentially the reason password managers were created in the first place. Your password vault is a much more secure alternative to spreadsheets and word-processing files floating around because your data is encrypted at rest (on your device).
That means the information you store in 1Password is most secure when 1Password is locked. Attacks on your locked data – like guessing your account password or trying to find an unpatched cryptographic flaw – are passive attacks.
But keeping 1Password locked at all times flies in the face of everything else our product is known for: convenience, security on the go, ease of use, adding efficiency to your workflows, and more. It’s also not realistic because as consumers, we typically choose products we can make use of, right?
Well, using 1Password means the possibility of active attacks.
Active attacks occur when malware targets 1Password as the app is running or being unlocked. An attacker can attempt to steal your credentials as you provide them; they can also steal secrets while the app is open and unlocked. Active attacks are the larger concern for our security and development teams. They’re also the hardest to guard against.
And there’s no one-size-fits-all solution. Our approach, for example, is a bit contradictory.
We face a challenge that’s incredibly common throughout our industry: Protections are largely specific to each platform, operating system, and environment because each has its own security boundaries.
Given the varied conditions and guardrails, the protections we can build differ, and depend on the platform and type of threat we’re addressing. We have to exclude many local threats from our threat model for that reason, and often reject related bug bounty reports. We implement platform-specific protections where we can but are often limited by the operating systems themselves.
Yet we always do our best to protect your data from local attacks, and often accept reports of missing local protections we can add without negatively impacting performance, other security considerations, or the customer experience.
When 1Password is locked, we make sure your vault contents are encrypted so they’re impenetrable, even to someone with root access to the device. We accomplish this with traditional 1Password accounts by storing the secret that’s required to decrypt the vault contents, your account password, in your mind — a location presumably inaccessible to attackers. Accounts protected by SSO and passkeys rely on security features built into the device.
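The "password in your mind" model can be sketched with a standard key-derivation function. This is an illustrative example only, not 1Password's actual scheme (which layers in additional secrets): it simply shows that the key needed to decrypt a vault can be derived on demand from a password and never stored, so the ciphertext at rest reveals nothing without it.

```python
import hashlib

def derive_vault_key(account_password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a 256-bit decryption key from a password with PBKDF2-HMAC-SHA256.

    The key exists only while the app is unlocked. At rest, the vault
    ciphertext plus the stored salt reveal nothing without the password,
    which lives only in the user's memory.
    """
    return hashlib.pbkdf2_hmac("sha256", account_password.encode("utf-8"), salt, iterations)

salt = bytes(16)  # in practice: a random, per-account salt stored alongside the vault
key = derive_vault_key("correct horse battery staple", salt)
near_miss = derive_vault_key("correct horse battery stable", salt)
assert key != near_miss  # a one-character difference yields an unrelated key
```

The high iteration count is the point: it makes the "guess the account password" passive attack described above slow and expensive, even against an attacker holding a full copy of the encrypted vault.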
We also do everything possible to protect against same-user privileged access — a term for malware that runs on a computer with the same permissions you have, and lacks the ability to elevate its privileges.
We can prevent such attacks from targeting the open, unlocked 1Password app to steal your information on devices running macOS, Android, iOS, and specific Linux distributions (those using Wayland). There are protections in place for Windows systems as well, but on Windows an anti-malware solution is still required to protect against processes that try to debug other applications.
While we account for same-user threats, it’s important to acknowledge this kind of malware is always capable of phishing or otherwise misdirecting you to a fake version of 1Password. Let me reiterate that safe browsing and a secure device are always the first lines of defense.
There’s one other category of security threats we take into account as we fortify 1Password: forensic analysis.
Maybe the would-be attacker has physical access to your device or exploits a vulnerability. However it happens, there are plenty of tools available that allow someone to view your secrets if they get their hands on a copy of your drive (disk) or memory (RAM).
To protect your secrets, we prevent your vault contents from ever hitting the disk unencrypted. And when you unlock 1Password the traditional way, the keys to decrypt your data are unavailable via forensics alone — the account password remains with you.
When you use SSO or a passkey to unlock your 1Password account, your vault information is only accessible if the forensics can gather other data to facilitate a successful SSO or passkey authentication, and that depends entirely on your SSO configuration and local storage protections.
We minimize exposure of your secrets in memory by attempting to clear the 1Password apps of sensitive data, and minimizing the amount and types of data in memory while the apps are unlocked. It’s difficult to guarantee absolute clearance when we talk about things remaining in memory, but we aim to maintain the highest possible level of security hygiene.
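As a rough illustration of that "best effort" memory hygiene, assuming nothing about 1Password's internals: a secret held in a mutable buffer can be overwritten in place the moment it's no longer needed, though the runtime or operating system may still hold copies elsewhere.

```python
def wipe(buf: bytearray) -> None:
    """Best-effort zeroization: overwrite a mutable buffer in place.

    This reduces, rather than eliminates, exposure. The interpreter,
    allocator, or OS (swap files, crash dumps) may already hold copies,
    and immutable types like str or bytes can't be wiped at all, which
    is why secrets are best kept in mutable buffers to begin with.
    """
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"s3cr3t-token")
try:
    pass  # ... use the secret while the app is unlocked ...
finally:
    wipe(secret)  # plaintext no longer lingers in this buffer

assert secret == bytearray(len(b"s3cr3t-token"))  # now all zero bytes
```

The try/finally shape matters as much as the wiping itself: the secret is cleared on every exit path, including errors, which is the "minimize the amount and types of data in memory" discipline described above.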
While these local protections cover a number of threats at the forefront of our threat model, there’s one local threat without any viable defense.
1Password lacks the ability to protect against an attacker who’s gained full control over a device with administrative or root privileges. But there’s an important fact to acknowledge here: In this case, 1Password is far from unique.
There’s no password manager or other mainstream tool with the ability to guard your secrets on a fully compromised device.
It’s simply a limitation of the operating systems 1Password runs on: there’s no way to isolate an application sufficiently to limit the damage malware is capable of inflicting. An application can make an attacker’s job annoying, but no amount of annoyance will stop a determined attacker.
At the end of the day, local threats present a number of issues we’re unable to reasonably address. And that’s the very reason we’re forced to exclude them from our threat model and reject many related bug reports. While we’re unable to defend against a full compromise, we use every option available to make it difficult for local threats to access your secrets.
But there’s a critical balance we have to consider: protection and usability. Many mitigations that make life harder for local threats make your life harder, too.
Runtime Application Self-Protection frameworks, for example, would allow us to make even root-level attackers suffer. But these third-party products often carry serious performance, reliability, and privacy considerations, serious enough that we’ve decided not to use them.
When security restrictions clash with convenience and we have to make choices, we’ll always choose to give your secrets the best fighting chance. Layer that approach with nearly impenetrable cryptography, same-user defenses, and the minimization of secrets in memory, and you arrive at the design of a thoughtfully secured password manager.
To clarify, full control of the device isn’t limited to physical control; it means access with administrative or root privileges.
Who’s responsible for regulating technological change in a democracy?
Verity Harding, a globally recognized expert in AI technology and public policy, and one of Time Magazine’s 100 most influential people in AI, thinks anyone – with any level of technological knowledge – can have a valid opinion about AI. After all, it may not be technological knowledge that helps us make the best decisions around how we want to use AI as a society.
Harding, who is currently the director of the AI and geopolitics project at the Bennett Institute for Public Policy and author of the book AI Needs You: How We Can Change AI’s Future and Save Our Own, talked with Michael “Roo” Fey, Head of User Lifecycle & Growth at 1Password, on the Random but Memorable podcast about technology policy and ethics.
To learn more, read the interview highlights below or listen to the full podcast episode.
Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.
Michael Fey: Tell me about the book.
Verity Harding: I wanted to make sure I actually added something new to the AI debate, because obviously it can get a bit old and tired sometimes. People have given me lovely feedback that what I have in there is really new.
MF: Actually, before we dig too much into the book, can you give a little background on yourself and what led you to writing something like this?
VH: It’s an odd journey I had to AI. I studied history at university and the earliest part of my career was spent in politics. I was the political advisor to the then Deputy Prime Minister, Nick Clegg, who’s now president at Meta.
It was really my experiences in politics that ended up leading me to technology. I worked quite heavily on a piece of legislation in the UK that was national security related. It was about updating the powers of the security services in the UK for the digital age. Obviously, that’s an extremely controversial and difficult subject, and it was very fraught in the UK with lots of different opinions on whether it was too much overreach from the government.
What it made me realize was that there was this huge deficit in terms of knowledge about technology between the technologists and the political class who are responsible for regulating this technology for society.
“There was this huge deficit in knowledge about technology between the technologists and the political class.”
I felt that this gap was not good and that there needed to be more people who could speak both languages – the political language and the technological language. Because of course, technology is extremely political. I eventually ended up joining Google and I was head of security policy in Europe, the Middle East, and Africa (EMEA) and also head of UK and Ireland policy, which was a fantastic experience.
Funnily enough, in the time between me leaving government and joining Google, the Edward Snowden revelations happened. That subject, which was already fraught, became even more fraught. We had to do a lot of work at Google, educating and explaining and helping politicians learn more about what digital privacy, security, human rights, and civil liberties on the internet really meant.
While I was at Google, the company acquired DeepMind, which is a British AI lab. I got to know the CEO and founder, Demis Hassabis, who’s a really visionary and inspirational scientist himself. I learned more from him about AI.
It was clear to me that all of the subjects that I cared most about when it came to technology policy were going to be made immeasurably better or worse by AI, depending on how we managed to navigate it. I wanted to be part of making sure that it went down the better route and not the worst route.
“It was clear to me that all of the subjects that I cared most about when it came to technology policy were going to be made immeasurably better or worse by AI.”
I moved to DeepMind and was one of the really early employees there. I co-founded all of DeepMind’s policy and ethics and social science research teams, as well as things like the Partnership on AI, which is an independent, multi-stakeholder organization of tech companies and different businesses and civil society groups and academics looking at the societal impact of AI.
All of this led me eventually to writing this book because I felt that I’d had this really privileged, up-close view and perspective on AI. I wanted to be able to share that more broadly. This book is really everything I’ve learned from all of that experience.
MF: You’ve been part of the AI conversation for a long time. At what point did you start writing this book? Did the launch and popularity of ChatGPT change the trajectory of your book?
VH: It’s true, I’ve been involved in it for a really long time.
What’s so funny is that when I moved from Google to DeepMind to work on AI policy, I was thinking, well, this is going to be a much quieter life. Because at Google we were right in the thick of many news cycles – as I said, the Snowden revelations were causing a huge amount of press coverage.
I also covered other issues at Google, like online radicalization and hate speech that were also getting a huge amount of attention. Going straight from politics into dealing with media stories and being involved in the constant 24/7 news – it’s quite exhausting.
Nobody was talking about AI at all, so I thought, well, this will be a lot quieter and I’ll have time to do the deep thinking and not be firefighting every day.
Demis offered me the job when he was in the car on the way to fly to South Korea. That’s where AlphaGo happened, which created a huge amount of interest and everything really blew up straight away, so I didn’t ever get that quiet life.
When I started writing the book, I would say that the media coverage and attention around AI had started to dip a little. It was a surprise to all of us in AI that ChatGPT had the effect that it did. We all knew about these capabilities already, but something just connected and hit, and you never can quite tell when that will happen. It brought AI crashing into the limelight.
“ChatGPT brought AI crashing into the limelight.”
I had either finished or was very close to finishing the book when that happened. But because I already knew about generative AI, I had written about it quite a lot in the book already. It was something that I was concerned that politicians – and society more broadly – weren’t grappling with.
Before ChatGPT we had already been warning about the possibility for deep fakes to mess with our democracy and undermine truth. We hadn’t seen much response to that, really. So, my book already covered all of those kinds of issues.
I didn’t have to change it much. I did decide to alter it a bit and include more on ChatGPT specifically, just because I think that made it easier to get my argument across. Before, I had to explain from scratch what generative AI is.
It was very helpful that ChatGPT enabled me to have this shorthand that made me pretty sure that anybody who picked up the book would know straightaway what that was.
MF: What are the most pressing concerns or misconceptions people have around AI?
VH: There’s no right and wrong answer about what people should or shouldn’t be concerned about when it comes to AI.
That’s what I say in the book: that everybody will have an opinion and everyone has a right to an opinion. Their opinion is no less or more valid based on the depth of their technological knowledge. And indeed, sometimes technological knowledge won’t help make a decision about whether we’re happy with AI being used in certain aspects of society or not.
I think one common misconception is that, if I don’t understand the deep technology and detailed technological side of AI, then I don’t have a right to have an opinion. I think there’s quite a lot of gate-keeping that happens in AI and it encourages people not to get involved.
“There’s quite a lot of gate-keeping that happens in AI and it encourages people not to get involved.”
That’s partly why I wrote the book – to say, in a democracy, you do get to have a say and you can educate yourself to an extent, but you don’t need to be the world’s leading research scientist to be able to have that say.
I also personally find the conversations around AI causing human extinction very unhelpful. I don’t think that that’s an appropriate way to think about this new technology. I think that it tends to obscure some of the more pressing concerns, and it tends to obscure some of the more exciting potential, too.
We’ve ended up in quite an odd position with AI. Back when I started at DeepMind, I was very keen that we would shift the conversation from AI as Terminator, AI as Skynet and towards AI as a tool. The things to be worried about should be more realistic; things like bias and accountability and security and safety. And I think probably the latest hype cycle has not contributed to calm common sense when we’re talking about it.
MF: Is one of the driving factors around the release of this book trying to bring a more stable, measured approach to the conversation?
VH: That wasn’t the motivation. The motivation was really that I felt I had something to contribute, something new to say. The bulk of the book is these examples of transformative technologies of the past.
I think coming from both a history training and a political background, I was very conscious that the tech industry is not known for its humility and likes to think everything it’s doing is the first time anyone’s ever done anything. But while AI is new, invention is not, progress is not, and innovation is not. I really had this hunch that there would be things we could learn to help guide us with the future of AI.
I feel very strongly that it’s an extremely important and exciting technology. I don’t mean to diminish its importance by saying that I don’t think that it will cause human extinction, but that’s not to lessen the need to pay real attention to its power. I felt that we weren’t looking enough to the past and what we could learn.
I suppose the other motivation was, I really believe in democracy. It’s not necessarily always the most fashionable thing, but I think policymaking is hard graft. It’s difficult and it can be a slog and it can be boring, certainly not the sexiest thing to talk about, but it’s really important.
“We’ve managed great technological change before and I’m really confident that we can do it again.”
Someone who read the book said to me just yesterday that they really got a sense from it that AI was important, but they also got a sense that humans were pretty great too. I liked that feedback because hopefully that does come across.
I feel that AI is important and it’s great, but we have done this before. We’ve managed great technological change before and I’m really confident that we can do it again.
Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
Subscribe to our podcast
This is the third in a series of four posts about shadow IT, including how and why teams use unapproved apps and devices, and approaches for securely managing it. For a complete overview of the topics discussed in this series, download Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
Until recently, companies have been able to exert pretty comprehensive control over security and how people work – in an office, at a desk, with a desktop computer, and using company-provided software and servers.
But the days of protecting clearly defined perimeters from the threat of cyber attacks with strong network security and unforgiving firewalls are, for most companies, gone.
Today, thanks to hybrid work, the situation can be very different. Many companies have limited insight into where or how their employees are working. In the park? On a mobile device? Laptop? Using any number of apps and tools? Cybercriminals are taking advantage of the confusion.
This reduced control makes it imperative for information technology (IT) and IT security teams to understand where and why employees are using shadow IT, so they can find ways to protect employees from security threats no matter how or where they work.
Employees typically use shadow IT to be more productive. A great analogy for shadow IT is something called the “desire path” – a term landscape architects use to describe the shortcut footpaths pedestrians carve into public spaces that get them from point A to point B faster than “official” or paved walkways. (You’ve seen them. They’re the dirt paths that cut the corner on the way to the train station or shorten the walk from the parking lot to the playground, through the flower bed.)
Security solutions should secure that desire path. This means understanding departments’ responsibilities and workflows, and where employees may be using shadow IT to help them in their jobs. Don’t expect the paths to look the same, department to department. Shadow IT shows up differently across teams because it’s used to support distinct business operations, roles, and responsibilities.
IT and cybersecurity teams need to operate a bit like detectives to discover employees’ desire paths. You might be surprised to find shadow IT desire paths crisscrossing every department in your company.
Trying to stop the use of shadow IT and forcing employees to stick to the “official path” of company-approved tools isn’t a particularly effective strategy. The most realistic and effective shadow IT security strategy is to secure the desire path for each individual employee, so they can use shadow IT securely.
In other words, to protect against the risk of security breaches, embrace shadow IT – and secure it.
The finance team is typically high on the security team’s list because it effectively holds the keys to the bank. The finance team handles critical financial data such as the company’s banking credentials, and sensitive information like audit reports and financial reporting.
Sometimes finance employees need to share sensitive documents with external partners like investors, board members, or auditors. And if they do that through insecure channels like email or SMS, it could open the door to unauthorized access.
Typical finance team workflows and responsibilities include:
With these finance team workflows in mind, where might shadow IT be lurking? Some typical information security vulnerabilities to investigate include:
The human resources (HR) team handles confidential employee information every day in its efforts to hire, develop, and retain talent for the company. HR also ensures the company is compliant with benefits administration and labor laws. In addition, they focus on creating and implementing employee management strategies, managing training and development programs, and fostering a positive workplace culture.
Typical HR team workflows and responsibilities include:
Based on these workflows, here are some areas where you may find vulnerabilities due to shadow IT lurking in HR:
The marketing team handles more sensitive data and information than you might expect. This might include campaign spending and reporting data, as well as customer information.
They also are on the front lines of social media and may be using multiple platforms or apps for customer support or top-of-funnel customer acquisition. As the guardians of your company’s brand reputation, it’s critical that marketing’s accounts aren’t compromised.
Typical marketing team workflows and responsibilities include:
Knowing marketing’s responsibilities, it can be useful to check the following for information security risks and shadow IT use:
Once you’ve identified the shadow IT desire paths for each team, then what? In terms of security measures or security tools, it’s most important for security professionals to secure credential sharing, as well as standardizing and securing access to apps and tools.
You can secure authentication, password management, and credential sharing using an enterprise password manager (EPM), which provides teams with a centralized solution to use, access, and share sensitive company data. It’s important that the EPM provides role-based access controls to ensure that users adhere to your company’s cybersecurity policies to defend against data breaches, cyberattacks like ransomware, and social engineering attacks like phishing.
EPMs can help you make the easy way to work the secure way to work. For example, EPMs can autofill time-based one-time passwords (TOTP) in addition to standard passwords. That enables security teams to require multi-factor authentication for providers that offer it, while streamlining the sign-in flow rather than adding friction to it.
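TOTP itself is an open standard (RFC 6238): the provider and the authenticator share a secret, and each independently derives a short-lived code from that secret and the current time. A minimal sketch of the algorithm, not any particular password manager's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time password (HMAC-SHA1, 30s steps)."""
    key = base64.b32decode(secret_b32.upper())
    # Count how many time steps have elapsed since the Unix epoch.
    counter = int((time.time() if at_time is None else at_time) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken from the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: the ASCII secret "12345678901234567890" at T=59 seconds.
assert totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", at_time=59, digits=8) == "94287082"
```

Because both sides can compute the code, an EPM that stores the shared secret can fill the current code automatically, which is how the sign-in flow stays smooth even with MFA required.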
To learn more about shadow IT and how to secure it to reduce risk of security incidents, stay tuned. Now that we’ve covered what to look for in teams like HR, finance, and marketing, next we’ll discuss the unique needs of developers.
Learn why teams like finance, marketing, and HR use shadow IT, the security vulnerabilities that can follow, and how to manage it all.
Download now
What’s good for business is often bad for security. That’s the inescapable conclusion of the 1Password State of Enterprise Security Report this year.
Here’s the backdrop, and it should be familiar by now: Work has, slowly and then all of a sudden, expanded. No longer confined to the office ecosystem, work happens in coffee shops and at home and at the airport, on company-provided laptops and the shared computer in the living room, on the family iPad and the phones in our pockets.
All that work leaves a residue of (often sensitive) data as it flows through managed apps like the company productivity suite and unsanctioned apps like the file-sharing service that a handful of people use, unbeknownst to IT.
With the explosion in the number of apps used for work, it’s a good time for employee productivity, and artificial intelligence (AI) has entered the picture to boost output even further. But IT and security teams are struggling to keep up, especially when they’re constrained by limited resources.
In the 1Password report, Balancing act: Security and productivity in the age of AI, we surveyed 1,500 white-collar employees in North America, including 500 security professionals. What emerged from our findings is a tension between productivity and security that has taken on a new urgency.
Let’s start with the growing pressure on employees to be productive.
More than a third of workers (34%) use unapproved apps or tools to get things done. This is shadow IT, and its use won’t come as a surprise to security professionals.
But the scale of the problem might. Of that 34% who use shadow IT, each employee uses an average of five unapproved apps or tools. In a company of just 300 employees, that’s more than 500 potential new threat vectors.
The problem is most pronounced in the tech industry, with nearly half of employees saying they use shadow IT, compared to 40% of employees in finance, 27% in healthcare, and 19% in education.
Security teams are trying to keep up. 92% of security pros say their company requires IT to approve software that’s used for work. But 59% say they have no control over whether employees follow those information security policies.
That kind of control and visibility is more achievable if employees use only work-provided devices, which 84% of companies say they require of their employees.
But 17% of employees say they never work on a company-provided device, using only personal or public computers for work instead.
More than two-thirds (69%) of security pros say they’re at least partly reactive in terms of security risk mitigation. That’s because they’re either pulled in too many directions (61%), don’t have the necessary budget (24%), or are understaffed (21%), among other reasons.
As a result, security teams are worried. When asked what keeps them up at night, 79% of security pros listed inadequate security protections. Among their top concerns: external threats like phishing or ransomware (36%), internal threats like shadow IT (36%), and human error (35%).
“Phishing scams, ransomware attacks, and a patchwork system give our security team heartburn. They’re the tireless ninjas keeping the bad guys out, so next time you see them, offer a coffee (or a medal). We’re in this digital battle together.” – IT Security VP, tech hardware company
Understandably, productivity is top of mind for employees. Unsurprisingly, in the pursuit of productivity, security suffers. 54% admit to being lax about their company’s data security policies, with 24% of those saying they’re just trying to get things done quickly.
Despite the well-known vulnerabilities associated with weak or reused passwords, 61% of employees (64% of managers and 53% of non-managers) confess to poor password habits, which increase the risk of data breaches. And half of employees say they slipped up on security in the past year, for example by clicking a link in a suspicious email or sharing credentials for work with people outside the company, making companies more vulnerable to a cyberattack.
This is a scenario seemingly tailor-made for AI to deepen the tension between security and productivity. 57% of employees say using generative AI applications makes them more productive.
But a full 92% of security pros have concerns about AI security, citing employees entering sensitive data into the tools, using AI systems that were trained with bad data, or falling for cybercriminals’ increasingly sophisticated phishing attempts powered by AI.
The delicate balance between productivity and security isn’t new, but the conditions leading to a potential breaking point are. While security teams are struggling to reduce the risk of cybersecurity incidents as workplace habits shift, employees are likewise singularly focused on the pursuit of productivity. Old concerns like the security of authentication methods haven’t gone anywhere, while new concerns only complicate matters.
We’ve only scratched the surface of this year’s report. Download 1Password’s State of Enterprise Security Report for the full breakdown.
Productivity and security are often in tension. Learn how today’s shifting landscape of hybrid work and AI has affected that tension, and how security professionals and workers are coping.
Download now
This is the second in a series of four posts about shadow IT, including how and why teams use unapproved apps and devices, and approaches for securely managing it. For a complete overview of the topics discussed in this series, download Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
High productivity levels are generally a good thing. For most organizations, the answer to the question, “Is it important for your employees to be productive?” is a resounding “Yes!” However, when employees ask to use a tool or app to boost productivity, companies may want to say “yes”, but often find themselves saying “no”.
What gives? Security concerns. And they’re legit. Companies are in the midst of experiencing a brave new world called hybrid work. Gone are the days of on-premise servers, software, and devices (and employees) that were relatively straightforward to manage and secure.
Now knowledge workers can get things done in coffee shops and their own living rooms. Companies turn to cloud services to support flexible working with “access from anywhere” apps and online collaboration tools, collectively known as software-as-a-service (SaaS).
Employees have become much more likely to select these cloud services and apps (not all company-approved) to get their work done. While hybrid and remote work was slowly starting to become a thing before, the pandemic accelerated it, and here we are.
So the million-dollar question is: If employees want to use their preferred apps and tools to be more productive, how can companies leverage this employee productivity while still protecting themselves from cybersecurity risks?
And what does worker burnout (the opposite of employee productivity) have to do with the IT department’s security strategy for shadow IT?
The first post in this series, What is shadow IT and how do I manage it?, explains what shadow IT is and what it may look like across different company departments.
To recap, here’s a quick definition: Shadow IT refers to the apps and devices that aren’t licensed and managed by a company.
These aren’t obscure apps used for nefarious purposes. Examples of shadow IT can be anything from Google Docs to social media. The issue is that employees may enter company information or client data in them and, if they log in with a weak or reused password, it can cause vulnerabilities that may result in a data breach.
This new hybrid, cloud-based work environment and employee experience requires a shift in companies’ security strategy. There are no walls. Instead, security and IT teams are managing a nebulous perimeter that’s constantly shifting and often spans the globe. In The new perimeter: access management in a hybrid world, we highlight four key considerations for securing the new perimeter of a hybrid workforce:
Productive employees. Burned-out employees. At the opposite ends of the spectrum, yet both contribute to the risks of shadow IT at companies everywhere.
At one end, employees are using shadow IT to help them increase productivity or do their jobs better. A Gartner survey shows that we’re using twice the number of apps we did in 2019, and usage continues to grow.
At the other end of the spectrum are employees who are being stretched too thin. And it’s not a few outliers. A 1Password report on burnout revealed that 80% of office workers feel burned out, and one in three workers say burnout is affecting their initiative and motivation levels.
It’s worth noting that this research was conducted during the height of the pandemic, when we’d expect burnout levels to be particularly high – but it’s also worth noting that we haven’t solved burnout since then.
In addition to the obvious physical and mental health effects, worker burnout can present a severe, pervasive, and multifaceted cybersecurity risk. This is because employees who are feeling burned out can be more lax about following security protocols. They also are more likely to use shadow IT. Here are some additional eye-opening findings from the 1Password report:
Why is this so concerning? Beyond the important concerns about human health and employee well-being, burnout and the resulting low levels of employee engagement negatively affect adherence to security protocols.
Bottom line? Nobody wins when an employee is burned out. When workers are so tuned out that they’re less likely to follow security rules, and more likely to use weak passwords or fall for phishing scams, it increases cybersecurity risks.
Adding complexity to the challenges of securing the new perimeter, it turns out (surprise!) that IT/security professionals aren’t superhuman. The 1Password report shows that they’re experiencing burnout in even greater numbers than the general employee population (84% vs. 80%).
While 89% of security professionals say they favor security over convenience, they also admit that they take shortcuts. For example, they use shadow IT (29%) or work around company policies to solve their own IT problems themselves (37%) or because they don’t like the company-approved software (15%).
Even more worrying, security professionals are twice as likely as other workers to say that due to burnout, they’re “completely checked out” and “doing the bare minimum at work” (10% vs. 5%).
That’s not good news, especially if a company has a reactive approach to managing shadow IT that depends on the vigilance of team members and their ability to quickly respond to problems.
As security professionals know, prevention is often more effective than reaction. Taking a proactive approach to managing shadow IT – securely enabling it – is the only viable path forward.
It starts with understanding employee productivity, workflows, and potential security vulnerabilities in every department. A next step is working to secure the “path of least resistance” for all employees at the individual level so they can use the apps and tools they need to boost productivity.
The good news is, by securing credential sharing and standardizing how access to tools happens, you also protect your organization against lax security practices and behaviors.
Next, we’ll explore how to identify shadow IT, what it may be used for (such as project management, social media, productivity tools, and file sharing), and common vulnerabilities for different departments, including Finance, HR, Engineering, and Marketing.
To learn more, follow this series on the 1Password blog exploring shadow IT over the next few weeks or download the ebook: Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
Learn why teams like Finance, Marketing, and HR use shadow IT, the security vulnerabilities that can follow, and how to manage it all.
Download now
1Password’s Go-to-Market (GTM) team is critical to achieving our mission of helping businesses, families, and individuals protect their passwords and other private information.
GTM helps our company understand the real-life problems that businesses are facing and how 1Password is best equipped to solve them. It’s a fast-growing team and we’re delighted that women like Jess Plowman, Senior Sales Development Representative, and Tiphanie Futu, Sales Enablement Manager, are playing such an integral role in its success.
Curious what it’s like to work in the GTM team at 1Password? Read on to learn about Jess and Tiphanie’s professional journeys, as well as their current role and day-to-day responsibilities.
Why did you join 1Password, and how did you end up here?
Back in 2022, I was made redundant from my previous role working as a sales development representative (SDR). I shared my experience on LinkedIn and 1Password reached out to see if I would be interested in applying.
After doing my research, learning about the company’s values and meeting the team, I decided it would be the perfect next step to develop my career. And I’ve never looked back.
What do you enjoy most about your role?
The highlights of my role involve speaking to a diverse range of people on a daily basis, learning about their needs for a password manager and how best I can assist them.
1Password’s culture focuses on development and progression, so I love helping with the onboarding process and watching my colleagues grow their skills and advance in the company. That focus helps my own personal growth, too!
If you were interviewing for a role on your team at 1Password, what are your best words of advice?
First of all, I would 100% recommend it! We’re a friendly and welcoming team! The SDR role is a great way to get started in the cybersecurity industry, learn about sales and develop an in-depth knowledge of the product.
Remember to be yourself, be open to learning and ask lots of questions. The role is remote but you’ll never feel alone!
How would you describe your team in three words?
Supportive, hardworking and fun!
Why did you join 1Password, and how did you end up here?
In 2022, I was impacted by a round of layoffs like many other people who work in tech. At that time, the company I had been working for helped us and shared our profiles on LinkedIn.
The Director of Business Development at 1Password then reached out to me to see if I’d be interested in joining her team. The business development representative (BDR) team at 1Password was just forming and I loved the idea of participating in its conception.
What’s your current role, and what are your day-to-day responsibilities?
I currently work as a Sales Enablement Manager for the BDRs, SDRs and BDR growth. My role is to provide those teams with the tools, resources, training, and processes they need to effectively do their job.
My day-to-day responsibilities include onboarding new reps, creating and sharing content for the team to leverage, and meeting with sales leaders to identify underlying issues or challenges, and craft effective solutions to address them.
You’ve transitioned roles at 1Password. What was that journey like?
Transitioning roles at 1Password was a rewarding journey marked by support and encouragement from my colleagues and leadership team. I had been with the company for over a year and was eager to explore new opportunities for professional growth.
When I noticed an opening on the enablement team, I started a conversation with my manager, Brandon, who was incredibly supportive from the outset. The leadership team’s support throughout the application process was truly encouraging. Their guidance and mentorship helped me navigate the transition smoothly.
What do you enjoy most about your role?
I enjoy the opportunity to collaborate with various teams and gain insights into their unique perspectives. Working in this way allows me to understand their specific challenges and needs, enabling me to tailor my support accordingly. This collaborative approach not only fosters stronger relationships across the organization but also allows me to continuously learn and grow professionally.
Who was an influential woman that made an impact on your career to date?
So many women have had an impact on my career. My cousin, Endji, who is about to become a doctor, and my little sister who created her own business, are the best examples of resilience and perseverance. My last manager, Diana, who is always sharing career guidance and advice, and all my friends who constantly encourage me in everything that I do.
Editor’s note: These interviews have been lightly edited for clarity and brevity.
Browse our current job openings to see if there’s an opportunity that matches your career goals.
View our open positions
This is the first in a series of four posts about shadow IT, including how and why teams use unapproved apps and devices, and approaches for securely managing it.
Whether or not you’re familiar with shadow IT, know this: it’s everywhere. Fighting it is like playing a game of whac-a-mole: Try to eliminate it and it will pop up again elsewhere.
So what are IT and security teams to do? A more realistic approach is to enable and secure shadow IT, so you can leverage its benefits without the security vulnerabilities it brings with it. Read on to find out how.
For a complete overview of the topics discussed in this series, download Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
In this series, we’ll cover:
Traditionally, employees used the software applications provided and licensed by their company to do their work. IT and security teams were effective gatekeepers, securing and managing access with identity and access management (IAM) tools like single sign-on (SSO).
Today, there’s an app for… everything. Grammar checking apps. Language translation apps. And a whole new, emerging category of AI apps. The choices are many and they are compelling. In fact, in 2021, 1Password research revealed that more than 60% of respondents said they had created at least one account their IT department didn’t know about.
That’s shadow IT: any technology (usually a personal device or a cloud service) employees are using without the Security or IT department managing it – and sometimes not even knowing about it. You may think there’s not much shadow IT at your organization, but the reality is that it’s there, and you’ll find it across any number of teams. If Microsoft Word isn’t managed by IT, it’s shadow IT. Same if workers are using Google Docs for collaboration, or Dropbox for file sharing, or any other cloud service.
While employees adopting “unofficial” websites or apps may seem like no big deal to some, IT and security teams know that entering company information or client data on these websites and apps can cause vulnerabilities that may result in a data breach.
Why do employees use shadow IT? Why are they making security and IT’s job more difficult? First, most employees probably don’t realize the impact their actions have on security and IT.
Second, there are benefits to shadow IT. Its use is rarely malicious – it’s about productivity, innovation, meeting deadlines, and doing good work. When the work pressure is high, employees look for tools to help. When someone’s on a tight deadline, security risk is often the last thing on their mind – especially if they’re feeling stressed or burned out (we’ll touch more on the security challenges of worker burnout in the next post).
So people will simply turn to the tools that help them get the job done.
What does shadow IT look like in the wild? There are countless examples of shadow IT, and use varies by team and role.
For instance, finance teams need to quickly share data with external partners like auditors, board members, or investors. HR teams commonly use external platforms for recruiting and hiring. And the marketing department wants apps to streamline tasks like customer relationship management (CRM), project management, and collaboration with external partners.
If there are no apps in the suite of company-managed tools with the functionality they’re looking for, workers will solve those inefficiencies themselves with shadow IT.
Survey says: Nearly three-quarters of North American companies have deployed single sign-on (SSO) tools. But despite that adoption, 30% of applications used by employees are not managed by the company.
Why? In addition to the plethora of apps at their disposal, hybrid work environments enable employees to split time between home and office. Some remote-first companies no longer even have office space, making bring-your-own-device (BYOD) even more common.
And when working from home, employees may be more relaxed about security risks, opting for the convenience of personal devices such as laptops or smartphones when accessing work emails and documents. One survey shows that 55% of employees say they use personally owned smartphones or laptops for their work at least some of the time.
Just like they find apps for personal use, many employees do the same when it comes to work – creating accounts for apps without going through IT, either because they aren’t thinking about security measures, or because they just want to get something done.
The uptick in app usage is huge: a Gartner survey shows that the average employee uses 2x more SaaS applications today than they did in 2019.
While single sign-on (SSO) tools are an important first step for securing access to enterprise tools, they fall short when it comes to managing shadow IT.
SSO can only secure access to apps the company or IT department knows about. Shadow IT, by definition, is a blind spot. This leaves critical gaps in a company’s identity and access management strategy. Those gaps are shadow IT.
There’s also a cost factor: it can be expensive for tools to be integrated and managed by an SSO vendor, with some software-as-a-service (SaaS) apps charging extra to be put behind SSO – a cost known as the SSO tax.
If SSO tools aren’t sufficient for managing security risks of shadow IT, what should companies do? Fight it? Try to stop shadow IT use? That’s unrealistic and unsustainable. The only viable path forward is to embrace it.
When nearly a third of applications used by employees aren’t being managed by their companies, it’s time to pause and figure out a better path forward.
You can’t realistically eliminate shadow IT. Therefore, the challenge is to enable and secure it so teams can access the tools they want to use, but in a secure way.
This can be achieved by making sure that each employee – on every team and across different data access points – has comprehensive protection. Approaching the issue at the individual level is important because shadow IT looks different for different roles and departments.
Where do you start? It’s most important to secure credential sharing and standardize how access to tools happens – so you can secure that access.
For example, for the finance team, access to things like bank accounts needs to be locked down – and they need secure methods for file sharing. For marketing teams that use and test apps like social media and messaging platforms, it’s critical to make sure only approved team members have the appropriate access to social profiles.
Applying the principle of least privilege (PoLP) can also help. That means making sure that employees have the minimum amount of access they need to do their jobs. For example, HR probably doesn’t need access to marketing analytics or campaign spend details.
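To make the idea concrete, here’s a minimal sketch of a least-privilege check. The roles, resources, and permission sets below are entirely hypothetical, invented for illustration only – they don’t reflect any particular product or company’s setup:

```python
# Principle of least privilege (PoLP), sketched as a role-to-permissions map.
# Each role gets only the minimal set of resources it needs to do its job.
# All role and resource names here are hypothetical examples.

ROLE_PERMISSIONS = {
    "hr": {"employee_records", "payroll"},
    "marketing": {"campaign_analytics", "social_accounts"},
    "finance": {"bank_accounts", "payroll"},
}

def can_access(role: str, resource: str) -> bool:
    """Grant access only if the role's minimal permission set includes the resource."""
    return resource in ROLE_PERMISSIONS.get(role, set())

# HR can read employee records, but not marketing's campaign data.
print(can_access("hr", "employee_records"))    # True
print(can_access("hr", "campaign_analytics"))  # False
```

Unknown roles default to an empty permission set, so anything not explicitly granted is denied – the deny-by-default posture that least privilege implies.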
It’s up to IT and security to figure out how to secure and enable these systems. 1Password can help. 1Password is an enterprise password manager (EPM) that gives teams a centralized way to use, access, and share critical company data, with role-based access controls that help ensure employees adhere to your security policies. EPMs can help you make the easy way to work the secure way to work.
Shadow IT is here to stay. It will likely continue growing, especially as new cloud services like generative AI garner wider use. And as it does, if left unchecked, it can increase your company’s attack surface, expose sensitive data (sometimes inadvertently), and increase the risk of a data breach.
In other words, no cybersecurity plan is complete without addressing shadow IT.
In the coming weeks, we’ll explore shadow IT in more depth here on the 1Password blog, including how to do more with limited IT resources. In the meantime, you can learn how to manage shadow IT, shore up your data security, and protect your company against cyberattacks by downloading Managing the unmanageable: How shadow IT exists across every team – and how to wrangle it.
Learn why teams like Finance, Marketing, and HR use shadow IT, the security vulnerabilities that can follow, and how to manage it all.
Download now
When daydreaming about the future, it’s fun to imagine faraway, fantastic, and possibly impossible scenarios. Moving sidewalks. Personal jetpacks. Unconfusing TV remotes.
But to make the world a better place, we need to balance small improvements with audacious moonshots. As science fiction novelist William Gibson famously put it: “The future is already here — it’s just not evenly distributed yet.”
A good illustration of that quote can be found in Estonia, where citizens have been using digital identification to vote and access public services for over a decade. Estonia is living in the not-too-distant future, waiting for the rest of us to throw away our laminated ID cards.
Delivering these kinds of improvements is easier said than done. The paradox of working at a technology company is that you need to build small but innovative products and features (the future) with tried-and-true approaches (the past). Tight deadlines often discourage experimentation but, in order to stay competitive, it’s important to revisit your processes on a regular basis. In other words, how you build can be as critical as what you build.
The good news, according to William Gibson, is that there are plenty of new ideas out there. You just have to know where to look.
In that spirit, the content design team at 1Password has been trying out an approach called concept-first design.
You might be familiar with content-first design — letting the key pieces of communication (i.e. content) between the system and the user determine the shape and flow of the experience. Concept-first design, meanwhile, is a way to make sure that users won’t find those key pieces of communication confusing. Concept-first design helps simplify complex product ideas earlier in the process. This makes it easier to translate those key ideas into language and UX that users will recognize, understand, and adopt.
That’s a lot to unpack, so the rest of this article will explore concept-first design in theory and practice, the benefits of using it, and how we’re starting to apply it at 1Password.
So what, exactly, is concept-first design? Let’s use Hipmunk, the late, great, travel site, as an example.
Hipmunk wanted to help users pick the best flight based on factors like the number of connections and the airline’s on-time performance. Instead of a convoluted bar graph that put the burden of interpretation on the user, Hipmunk created an Agony index. Which is almost exactly what it sounds like. Finding the right balance of pain and price in order to minimize agony was an easy-to-grasp concept for anyone who’d endured a terrible flight to save some money.
Now, in order to do concept-first design well, you’ll need to start by shifting your perspective.
As Elizabeth McGuane, a UX director at Shopify, points out, concept work requires swapping Figma and Adobe Photoshop (at least initially) for design tools like metaphor and narrative. In her recent book Design by Definition, McGuane notes that “every digital product starts out as a problem to be solved. The idea, or concept, is the way we meet that problem – the premise of our solution.”
Instead of immediately pushing pixels around, McGuane challenges product designers to brainstorm a bunch of metaphors by asking:
As McGuane notes, “metaphors bring the abstractions of software closer to life, making interfaces feel real.” If you keep something real and relatable in mind while you’re designing software, there’s a greater chance the user will grasp the final concept and find it intuitive. (As a security company, 1Password has found the padlock to be a particularly useful bit of inspiration).
Starting with the core idea of a feature – the concept – is a way to get everyone in your company on the same page. This, in turn, allows your product and content design teams to work more effectively in parallel. A shared language gives your team a shared understanding of what you’re building and, as a nifty bonus, it makes it easier to name things too.
Speaking of which, McGuane has an entire chapter about naming in her book, which reinforces how important it is to product work. As she points out, endless arguments about product or feature names are usually due to a hazily defined concept. Naming is hard, but tech companies often make it much, much harder by starting with weak or confusing concepts.
That’s why 1Password’s content design team, with help from product marketing, has been working on different ways to improve the name game. This includes team-wide Mad Libs exercises, where we test out potential names and concepts in realistic situations. We’ve also conducted UX research sessions where we ask customers to explain what potential names mean to them.
Without giving away any top-secret information, 1Password plans to expand our offerings in 2024. That’s why, in the spring of last year, senior content designer Chantelle Sukhu and I gave a talk at a product manager meeting about how content design can improve the stuff that 1Password builds and ships.
As our offerings expand, it’s even more important to think carefully about concepts, complexity and clarity. To make sure everyone on the call understood the worst case scenario, Chantelle and I shared an example of concepts gone rogue:
“The Zoom Rooms Controller app provides an ideal way to manage a Zoom Room meeting without having to interact with the in-room Zoom Rooms Controller.”
That’s not an excerpt from an unpublished Dr. Seuss book. It’s actual help content on the Zoom support page. Now, to be fair to Zoom, many other companies find themselves in similar situations when product concepts aren’t thought through and carefully managed. The result of this chaos? The user is forced to learn, understand, and memorize a series of unclear concepts.
We noted during our talk that successful content design is often invisible. But users definitely notice intricate error messages, inconsistent labels, and confusing products that require complex instructions.
Successful content design is often invisible.
Along with helping product managers avoid Zoom doom and gloom, the content design team at 1Password has been working to identify and eliminate unnecessary concepts.
In the same way a product can accumulate technical debt, it can also suffer from conceptual debt. As McGuane notes in her book: “Technology companies are machines for meaning.” And too much meaning is as bad as too little. Making our products and features less confusing demonstrates user empathy and makes it easier for everyone at 1Password to do their best work.
The first step of this digital spring cleaning has involved concept mapping. This is a way to visually capture the key aspects of 1Password and the interconnections between them. Creating a concept map for 1Password has helped us see the bigger picture and made it easier to integrate passkey options and identify improvements for how users sign in to our app. It’s also yet another way to create products that feel more consistent and easier to start using right away.
For all the value they bring, identifying and debating concepts can be tricky.
To make our naming and mapping work more tangible, 1Password content designer Grace O’Neil created ConceptMania: a single elimination tournament bracket for ideas. Working in groups, the goal was to determine the clearest concept in 1Password. The exercise sparked a lot of discussion about what makes winning concepts like “subscriptions” and “tags” easy to understand and communicate to users.
ConceptMania was fun and useful, especially because it reminded the team about mental models: a tool our brain uses to handle complexity. A mental model is a representation of how something works based on our real-world experiences. Since users bring their mental models into 1Password, our concepts need to reflect and build on those mental models.
As usability pioneer Jakob Nielsen famously put it: “People spend most of their time using digital products other than yours. Users’ experiences with those other products set their expectations.”
That’s why, a few months after ConceptMania, our design team published competitive audit guidelines. A competitive audit is a systematic look at direct and indirect competitors. It’s a way for us to spend time with products other than 1Password to better understand common concepts. And by thoroughly exploring the problem space, we can avoid being insular in our thinking and instead rely on concepts that our users are already familiar with.
Concept-first design doesn’t solve every product problem — nor is it meant to. But it’s a fantastic way to make the often invisible work of content design impossible to ignore.
Defining, describing, and solving core product problems with a conceptual framework creates stronger connections and a clear sense of purpose between content designers, UX researchers, and product designers. And concept-first design helps avoid, or at least minimize, tricky debates about naming — which in turn reduces the content design agony index.
And, even more importantly, designing with clear, thoughtful concepts leads to products that are easier for users to grasp and enjoy.
Author’s note: This blog post is based on a talk I gave at 1Password’s 2023 Product and Design offsite.
It’s essential during Women’s History Month to recognize the strides women have made in various fields. However, networking remains one area of career advancement and satisfaction where women often face unique challenges. From battling imposter syndrome to navigating male-dominated spaces, women encounter obstacles that can hinder their networking efforts.
If you’re struggling or unsure how to grow your professional network, fear not! In this blog post, we’ll address common fears and challenges that women often face while networking, and give you some strategies to overcome them. We’ll also explain the importance of shamelessly networking and cultivating meaningful connections.
Networking takes a lot of confidence. It’s natural to feel nervous about introducing yourself to new people and building real, meaningful connections. Here are some specific fears that you might have about networking, and some tried-and-true solutions:
Many women struggle with the feeling that they don’t belong in professional settings, which leads to self-doubt and hesitation in networking. You can combat imposter syndrome by acknowledging your achievements and embracing your unique skills and experiences. (If you haven’t done so already, create a new note on your PC or phone to track your accomplishments!)
Remember, you’ve earned your place at the table.
It’s natural to fear rejection when reaching out to new contacts. However, don’t let the fear of “no” hold you back. View each interaction as an opportunity for growth, and remember that rejection is not a reflection of your worth. The other person simply may not have the time to develop a new professional relationship at the moment.
Keep persevering, and you’ll find the right connections.
If you’re unfamiliar with a specific industry or new to a workplace, it can be intimidating to navigate conversations. You might be thinking: “What happens if I run out of things to say?”
It’s okay not to be an expert, and simply asking for advice on a particular topic can be an incredible tool for opening up meaningful conversation.
In industries traditionally dominated by men, women may feel out of place or overlooked. Instead of shrinking into the background, assert yourself confidently. Your voice and perspective are valuable assets, so speak up and make your presence known.
Shameless networking is about being bold and owning the fact that you want to meet new people and find opportunities for professional growth. Embrace the power of networking events, conferences, and online platforms to connect with like-minded individuals. Don’t be afraid to initiate conversations, share your accomplishments, and express your career goals.
Remember to look for ways to offer value to your connections, whether it’s through sharing insights, providing referrals, or offering assistance in their projects. By putting yourself out there unapologetically, you’ll increase the chance of finding valuable opportunities and advancing your career trajectory.
Building a strong professional network isn’t just about collecting business cards or LinkedIn connections – it’s about fostering genuine relationships based on trust and mutual support.
Approaching networking in this way will also increase your overall career satisfaction. How? Genuine connections give you access to more resources, support, and opportunities that enhance your professional life.
Here are some tips for cultivating meaningful connections:
Be authentic: Authenticity breeds trust and rapport. Share your passions, interests, and goals genuinely, and seek connections who align with your values.
Offer value: Networking is a two-way street. Be proactive in offering assistance, advice, and resources to your connections. By adding value to others, you’ll strengthen your relationships and build a reputation as a valuable ally.
Follow up: Don’t let your connections fade into obscurity after the initial encounter. Follow up with personalized messages, schedule coffee meetings or virtual catch-ups, and stay engaged with your network regularly.
As women continue to shatter glass ceilings, networking remains a powerful tool for career advancement, professional success and overall satisfaction. By overcoming common fears and challenges, shamelessly promoting oneself, and cultivating meaningful connections, you can build a robust network that supports your aspirations and helps you thrive in any industry.
This Women’s History Month, let’s celebrate the resilience and tenacity of women in networking and champion each other’s success. Consider reaching out to one new person today, whether that’s in person or on a platform like LinkedIn. It’s guaranteed to make their day!
Together, we can create a more inclusive and supportive professional landscape for generations to come.
Joined by the popular Mac Admins podcast cast, we dive into Apple security and privacy, and how Macs are being integrated into workplaces everywhere. Find out whether an Apple product on its own keeps you secure and safe from viruses, or if you need additional security apps to protect your devices.
Michael “Roo” Fey, Head of User Lifecycle & Growth at 1Password chats with Tom Bridge, Marcus Ransom, and Charles Edge – three of the rotating cast of Apple expert hosts and consultants – on the Random but Memorable podcast. To learn more, read the interview highlights below or listen to the full podcast episode.
Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.
Michael Fey: A lot of people believe that buying an Apple product or a device keeps them secure and safe from viruses, is that true?
Charles Edge: No. The first viruses – or the first viruses for personal computers, at least – were written for the Mac, so I don’t think it was ever true.
Having said that, I do think Apple makes a lot of privacy and security decisions on our behalf out of the box that make the platform very secure, comparably. That’s not to say I don’t think third-party products have a place. Take 1Password as an example. Keychain’s awesome. 1Password has all these things that make it even better. And the same can be said for endpoint detection and response solutions (EDR).
“Apple makes a lot of privacy and security decisions on our behalf out of the box that make the platform very secure.”
Tom Bridge: I don’t think that there’s a ton of need to go out and invest in EDR like a Carbon Black or a CrowdStrike for your personal individual machine. I don’t think that that’s a great use of money or time.
But there are some common-sense things that you can do to protect yourself. Some of the more consumer-friendly solutions are a good option. But business needs are a little bit different than say, an individual focus.
Marcus Ransom: The other way I like to look at it is, the computer itself is pretty safe. It’s a pretty robust platform. As Charles mentioned, Apple has done an awesome job of building something that has a level of protection and privacy and makes it really hard for third-party threat actors.
But one of the biggest problems is the person using the computer and their behavior. Once again, Apple has done a really awesome job of trying to encourage and promote good behavior, but there are still plenty of things you can get absolutely wrong if you’re not mindful of what you’re doing.
“One of the biggest problems is the person using the computer and their behavior.”
It’s quite amazing to see what sort of paths people attacking Mac users will use compared to the typical Windows virus, which is a whole different kettle of fish.
MF: Apple consistently adds new security features and new privacy features to their products. What has recently come out from Apple that has got you excited as admins or changed the way that you do device management?
CE: Passkeys. We can start there since we’re on a podcast from a company that supports them!
TB: Passkeys and iCloud Keychain. As we pivot into the business for a second, the ability to put those in a managed Apple ID keychain is absolutely right.
Then we go one step further: being able to tie the authentication of your managed Apple ID to an external identity provider that isn’t just Google or Microsoft. That could be a JumpCloud, an Okta, or anybody else along those lines.
That’s a huge step forward for a lot of business organizations in terms of making managed Apple IDs more approachable, more familiar, more comfortable for the average end user. So that they can know: “Hey, look, I don’t have to remember a different password. I don’t have to get out an SMS-capable device to complete authentication.” To be able to do it the same way that I normally authenticate to do any of my other business tasks is so crucial.
I’m really excited to see Apple moving in that direction and supporting that kind of managed Apple ID federation.
CE: Some of these things are not things that users are even asking for. As an example, just last week, Apple introduced post-quantum encryption (PQ3) for iMessage. Now it’s like: “Oh, you don’t even need Signal or one of the other apps in order to have that same level of encryption to protect data, whether it’s at rest or in transit, on that device.”
TB: While the texts I exchange with my friends aren’t something that I’m worried about, the fact that any messages I send are safe from attacks by quantum computers… that’s a real good feeling. And it wasn’t something that I sought out to ask from Apple, but boy, are they out there looking out for the people that use their platforms in ways that other companies just aren’t.
MR: One of the things that I really love is Apple’s idea of containerization. On your personal device, you can have your work applications, but rather than having a portal that you go into for work or a different account that you sign into, the apps are all there, on your phone. If you use a work app, the company has responsibility for that work and can see what’s going on in there. If you’re using personal apps on the same phone, work can’t see it.
One of the details I really love is that they won’t even know the serial number of your device, because that serial number can be used to narrow down who you are or identify you. The idea is making things secure for an organization – doing a really good job of preventing copy-and-paste and clipboard sharing between personal and work apps – while at the same time giving the user privacy.
I remember back to the early days of MDM (mobile device management) when, if a personal device was enrolled in MDM, you were able to see what was on it, like what apps they had installed on an iPad. From that, you could draw conclusions about a person.
Not having that available any more is really refreshing. We see so many organizations saying, “Oh, we need to be able to geolocate all of our users wherever they are.” Most of these ideas come from a good place. They’re thinking about the value that they can have.
“If a personal device was enrolled in MDM, you were able to see what’s on it. Not having that available any more is really refreshing.”
But then you think about what happens if somebody with either bad intentions or sloppy digital hygiene gets access to that information. The next thing you know, your company is in the news! And as a user, something very personal of yours is now public, and you can’t walk that back.
I love the way Apple makes decisions on behalf of Mac admins, about what they can and can’t do, really, to protect us from ourselves in a way.
MF: What do you think is the perception of Apple devices in corporate environments these days? Do you see it shifting? There was a time where Apple was pushing out ad campaigns like, oh, you can do that on a Mac, too, like Microsoft Office and things like that. But obviously, there’s a lot more than just running Office to bring a Mac into a corporate environment.
TB: I see it shifting and that it’s shifted a lot over the last five years. If we think about how businesses have traditionally seen Apple – in the “before times” and the “long ago” – we certainly saw Apple devices as “less than”. A lot of corporate IT departments were like: “Oh, that one Mac over there, I was made to support it by my evil boss.”
If you want to put one person’s name out there – and I don’t like putting one person’s name because there was a whole team working with this person – go look at Fletcher Previn. He was at one point CIO of IBM, and he’s now SVP and CIO of Cisco. If you look at the programs that he helped build, he basically said: “Hey, it’s okay to use a Mac at work. If you want to use a Mac, you should be able to.”
That approach has paid such dividends for IBM, Cisco, and other organizations throughout the Fortune 500. Now there isn’t anybody any more without some plan for supporting Macs in the enterprise.
CE: The one thing I would add is that I do see an almost overcorrection in some organizations. They equate the Mac with the “digital transformation” buzzword. They’re like: “Well, if we allow a thousand Macs here, then we have completed the digital transformation.”
In my experience, digital transformation is about things like automation, cost-cutting, and getting to market faster with new product development. Just allowing a Mac and treating it like Windows is not synonymous with digital transformation unless you’re looking to also automate things and get things to market faster.
MF: Let’s talk about the cybersecurity landscape, which is constantly evolving. How do you stay informed about emerging threats and vulnerabilities that are specific to Apple products? What steps can admins and users take to stay ahead of these potential security risks?
CE: I can speak to what I do. I watch every video from Objective by the Sea (a Mac security conference). It’s a wonderful conference that goes in depth – it might be too in-depth for the average user. I also typically look for everything about iOS, Mac, iPad, visionOS, even passkeys, that pops up at the DEF CON and Black Hat conferences. Again, that’s pretty deep for regular people who are just trying to protect their machine at home.
TB: Well, I’m a little bit of an outlier too because my next-door neighbor is one of the program managers for CISA, the Cybersecurity and Infrastructure Security Agency here in Washington DC. I just go across the fence and ask Dave what happened!
But really what I do is read a lot of things. I will call out the Objective-See Foundation. As Charles mentioned, they have a conference, but Patrick Wardle also has a Patreon and a blog, and that’s a great place to go look. I love the threat labs research from the folks at Jamf, and from Kandji.
And Malwarebytes. They’re doing great work out there, and that is a great place to go see what the cutting edge of threats is. I also want to caution you, if you read all this and you get scared, take a deep breath. It’s going to be okay. A lot of it’s theoretical.
CE: Or been addressed in a point release or a security update.
TB: The number one thing that anybody can do to protect their own security is keep their machine up to date. Period. Full stop. Apple patches the latest version of the operating system for all of the security bugs. And keep your third-party software up to date too. I know that it’s fun to click the box that says “not now” or “ask me again tomorrow”, but don’t get in the habit of doing that for three and a half years!
“The number one thing that anybody can do to protect their own security is keep their machine up to date. Period.”
CE: Don’t enable sharing. Read the dialogue boxes. Ask questions like, “Why do you want access to my Camera Roll?”
MR: There’s also some basic digital hygiene as well. There’s this great auto login functionality in macOS, so when you turn on your machine, it just logs in, which is a great convenience. Unfortunately, it’s also a really good way to give somebody else access to what’s on your machine if they have physical access to that machine. So use a good password manager. Use passkeys when you can.
CE: Don’t reuse the same password.
MF: Where can folks go to find out more about you?
TB: You can find the podcast at podcast.macadmins.org. You can join us in a 65,000-person-strong Slack for people who manage Apple devices at scale. Check that out, read the code of conduct. We really like to keep it a safe place for people to participate and to be themselves, so please give that a look and come join us.
Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
Subscribe to our podcast
We recently introduced labs, a new and pioneering space in the 1Password apps that lets customers opt in to test experimental features.
For us, innovation isn’t just a buzzword – it’s a big focus for all of our teams. We are always looking for ways to evolve 1Password so we can offer a leading-edge experience in both security and convenience.
As the only password manager involving our customers in the early stages of development, we’re breaking new ground in creating a truly human-centric experience. With customer feedback helping us shape experimental features before we commit to bringing them to all 1Password customers, every new addition to labs is tailored to real-life use cases.
By testing exciting new features through labs while continuing to make 1Password more user-friendly and intuitive, we’ve been able to balance innovative additions with improvements to our apps’ existing features and functionality.
Since labs was launched, we’ve been busy sharing new experiments and using customer feedback to improve those features and officially add them to 1Password for everyone to use.
Here’s a breakdown of what we’ve been working on with the help of our customers.
Default details for a smoother autofill experience
The first experimental feature introduced to labs was the ability to set default details. Given the positive feedback received from customers, our teams iterated on this feature, made improvements, and shipped it to all customers under a new “Profile” tab in the 1Password apps.
By setting default details, you can select your preferred payment card and identity item, which includes things like your name, address, email, and phone number. Your chosen selections always take precedence in the list of options the next time you need to autofill any of that information. This can be set for each of the 1Password accounts you are signed in to, so if you have a work and personal account, you can set your default details for each of them.
Next time you’re filling out online forms or making online purchases, you can enjoy a seamless and improved autofill experience, ultimately saving valuable time and simplifying digital interactions.
Custom browsers for more flexibility and control
If you’re using 1Password on macOS and opt for a different browser, like Orion or Wavebox, you can now authorize 1Password to connect to those browsers and improve the functionality of the 1Password browser extension. This brings significant improvements, such as letting you unlock the 1Password browser extension with Touch ID in those browsers.
This is a significant step toward providing greater autonomy and flexibility in browser selection, streamlining workflows, and enhancing your experience – and it lets more people than ever experience all the benefits of 1Password.
Nearby items for convenience on the go
With nearby items, you can assign a location to any of your 1Password items. Then, on the 1Password mobile apps, a new dedicated section on the home tab will display when those items are physically close to you.
Imagine having quick access to essential information based on your location – whether it’s the door code at your workplace or the combination to your storage shed. With people becoming increasingly mobile, this feature aims to provide tailored convenience wherever you go.
The 1Password community was very engaged with this feature and shared a huge amount of feedback that we were able to implement. For example, use cases from the community include: office Wi-Fi passwords, gym locker PIN codes, garage door or gate access codes, debit card PINs shown near ATMs, health or benefits insurance details for when you’re at the dentist or doctor, and membership cards at specific branches (such as library cards and gym cards).
New vault view in 1Password.com for consistency across platforms
Our desktop application is the best way to manage your items in 1Password. This update not only aligns the design of 1Password.com’s vault item view with our main desktop application, but also enhances our ability to consistently introduce new features across all platforms.
The current version offers read-only functionality, serving as an early testing phase to identify potential issues. However, over the next few months, 1Password will gradually introduce full functionality that aligns with the current web interface as we continue testing and development.
Unlike other experimental features in labs, this update doesn’t require manual activation and won’t appear under the “Labs” tab in the 1Password apps. Instead, you can access it directly via an in-app banner within the vault item view on 1Password.com.
Beta: Auto-type for Windows for simplified logins
Auto-Type via Quick Access on Windows simplifies the login process. By enabling this feature in labs through the beta build, you can quickly fill and submit your login credentials in various applications and forms using a simple shortcut (Ctrl+Shift+Space).
Once activated, it automatically types the username and password into the respective fields, enhancing efficiency and saving time. Additionally, for logins with two-factor authentication (2FA), the one-time code is conveniently copied to the clipboard for easy pasting. While not a substitute for Universal Autofill on Mac, Auto-Type via Quick Access provides a similarly streamlined experience, offering a seamless way to access your accounts across different applications.
All the experimental features in labs are turned off by default, which means you’ll have to opt in for each experiment you’d like to try out, giving you full control over the experience. In the 1Password mobile and desktop apps under Settings, you’ll find a Labs tab. Select Labs, and you’ll see a list of all available experimental features. From there, you can easily toggle each feature on or off at any time.
We track the performance of each experimental feature and gather customer feedback on it. If an experiment receives enough positive feedback, the feature progresses through the beta 1Password apps and is eventually released officially in all 1Password apps.
We’re not just committed to continuously enhancing the 1Password experience – we also want to transform the way people manage the tension between security and convenience by making the secure thing the easy thing.
With support from inventive initiatives like labs and customers like you, we’re well on our way – and we’re just getting started. Stay tuned for more ways we plan to shake up password management and reshape online security.
Keep all of your accounts secure with 1Password, the world’s most-trusted password manager. Get started today with a free 14-day trial.
Try free for 14 days
We’re thrilled to announce the availability of a partner rebate incentive for partners of 1Password. As valued members of our partner ecosystem, you play a pivotal role in our collective growth journey.
With this program, we aim to deepen our partnership, drive mutual prosperity, and unlock new opportunities together.
Partner rebate programs are not just about offering financial incentives – they’re about fostering stronger relationships, driving collaborative growth, and rewarding your dedication and efforts. By participating in our rebate program, you gain access to benefits designed to amplify your success:
Increased earnings potential. Earn attractive rebates on your performance by achieving sales targets, expanding market reach, and driving customer engagement.
Alignment of interests. The rebate program is designed to align with your business objectives, making sure that our mutual interests are in sync and driving toward shared success.
Recognition and appreciation. Your dedication and contribution to our partnership do not go unnoticed, and the rebate program is a way for us to recognize and appreciate your hard work and commitment.
How to get started:
Review program details. Familiarize yourself with the program details including eligibility criteria, reward structures, and performance metrics.
Promote and sell. Leverage partner resources to support your ability to promote and sell our product. Use marketing materials, training, and enablement tools to maximize your effectiveness.
Claim your rewards. Once you’ve met the program requirements, our team will ensure a seamless payout process.
We’re excited to embark on this journey together, empowering you to reach new heights of success and unlock boundless possibilities.
Learn more in the 1Password Partner Portal or sign up to be a 1Password Channel Partner today.
If you think security is all about risk management, cybersecurity expert Greg van der Gaast thinks you’ve got it all wrong.
Van der Gaast – chief information security officer (CISO), consultant, author, world-famous former hacker and undercover agent – talked with Michael “Roo” Fey, Head of User Lifecycle & Growth at 1Password, on the Random but Memorable podcast about why taking a different approach, especially in a world of increasing security incidents and ballooning budgets, can be a much more effective strategy to reduce both vulnerabilities and cost.
What’s different in Van der Gaast’s approach? It has a lot to do with focusing on quality and process before risk. And repeatedly asking “why” to get at the root of upstream security issues. Read on for the interview highlights, or listen to the full podcast episode.
Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.
Michael Fey: What was your journey from undercover hacker for the FBI and Department of Defense (DOD) to cybersecurity consulting?
Greg Van Der Gaast: As a teenager growing up in Holland, I started learning about operating systems and how stuff worked and how you could break stuff and make it do other stuff.
I semi-accidentally hacked into a nuclear weapons facility somewhere overseas – I think three or four weeks after they set off five atomic bombs underground. It was quite a hot topic at the time. I just realized what it was, downloaded a bunch of research, and next thing I know – I had moved to the States at this point – it’s the CIA, DIA (Defense Intelligence Agency), and FBI.
I had four suits show up at the door. The first one said he was from the DOD. I invited him in and I told him, “Look, I was living in Holland at the time. This was somewhere over in Asia. I don’t think I’ve broken any U.S. laws. I was worried you guys were from Immigration.”
That’s when suit number four raised his hand and said, “I’m from Immigration.” I was put into the back of a van and taken to a detention facility – to this day, I still don’t know where it was.
About a week later, more suits came and made me a job offer I couldn’t refuse. I spent the next three years working undercover, getting paid cash by federal agents in underground car parks.
What I do now is at the extreme end of the strategic- and business-focused leadership and root cause big-picture stuff that isn’t on the radar of most security people. Part of my challenge has been that no one’s asking for this because no one knows that these are even valid approaches.
“What I do now is root cause big-picture stuff that isn’t on the radar of most security people.”
I think my overall mentality is more one of a problem-solver rather than just a “security person”. I started looking at the bigger picture of, “Well, we’re deploying all these firewalls and database encryption and intrusion prevention systems, all the latest stuff. Are we actually protecting the business? Do we know what data is where and who’s doing what and how these business processes work and what data I should even be seeing over this network that I’m monitoring?”
It dawned on me that everybody else hated process. But I realized that if we’re not consistently implementing, configuring, monitoring, and managing any of this stuff, how can we be sure we’re not missing anything? I started focusing on “how do I make sure I mature the technical tooling?” And I realized a lot of these things wouldn’t happen if it weren’t for some root causes elsewhere.
So why don’t we focus more on our IT maturity rather than spending an absolute fortune on security operations? If those decisions are made by other departments, how do I influence change and create a program?
Learning the language of business, and about business itself, took me down this journey. Security is treated as being about risk management – and I think that is completely backwards. We should be like most other industries – manufacturing, aviation, oil and gas, healthcare – where it’s about quality management.
It’s about having really high quality processes so you don’t have defects that cause issues. We don’t do that. We don’t go upstream. We don’t go holistic. We just constantly detect and respond to the defects being exploited.
That’s brought about this really different approach to security that’s process focused, not about doing IT security, but about going through your business processes and making sure they’re secure.
In other words, stop doing cybersecurity, start doing business securely.
MF: You mentioned other industries that are heavily process oriented and quality focused. When I think about that from a software point of view, my brain says: “We can’t just build a resilient widget because the platform on which that widget rests is constantly shifting.”
It’s an apples-to-oranges comparison. Am I thinking about this the wrong way?
GVDG: Every industry – aviation, automotive, transport, oil and gas – focuses on quality management. They focus on addressing the root causes behind problems. I see incident responses like: “Oh, the root cause was that this got exploited.” Why was that exploitable?
“Oh, well, because so-and-so did X.” Okay, why did so-and-so do X? “Oh, because they had this.” But why did they do X without considering the downstream impact? “Oh, because there’s no awareness.” Why? Why? Why?
Toyota has a “five why” system to find a root cause. They ask “why” five times and go five levels deeper. When you address those things, you get this downward curve of issues or defects over time.
If you think about security vulnerabilities, they’re actually quality defects in code, in the configuration of a system, and how a system is built for users. But there is a point at which there is a diminishing return in resolving the root cause of this thing that caused 50% of our issues. Or the second thing that was 30% of our issues. You end up with this level of residual activity where it’s just not worth it to fix the root cause because it’s too expensive or happens so rarely that it’s just not cost-effective.
That is the point at which risk management should start. Because if you look at most vulnerabilities out there today – I’m going to say 98% of them are known defects.
“If you look at most vulnerabilities out there today – I’m going to say 98% of them are known defects.”
If we started doing those things, we would reduce our exposure by probably one or two orders of magnitude. That’s really significant. That’s what I’m getting at because we’re at a point where instead of having that downward curve in security, every year we spend more money and every year we have more incidents.
We see new applications all the time that have four-, five-, six-year-old Common Vulnerabilities and Exposures (CVEs) in them. If someone’s using six-year-old code to build a new platform, that’s a process issue. We know how to fix this, but we don’t.
Only once you’ve done all that should we risk-manage residual issues. But we’re not doing the big picture. Very few people in security bring that total business lifecycle view, so that management appreciates the real cost. The reaction I get from CISOs and security leaders usually falls into two camps.
The first is: “Yeah, I get it but please shut up because I like my job security.” We don’t want to fix the problem because it threatens our employment! Really, if you’re the one fixing the problem, you become far more valuable. This is how I’ve grown in my career – not by creating more problems to keep me busier, but by learning to fix bigger problems that create value.
The second reaction, which is quite common, is a problem of structure. It’s: “Yes, Greg, I understand that these process issues somewhere upstream are causing me all this work and it’s costing the business all this money to mitigate and remediate, but I don’t own those things. I don’t own the IT department. I don’t own the engineering function. I don’t own the fact that the salespeople put contract data in this platform. So I can’t do any of that.”
And I think that’s true. But now that you’ve identified the problem, you need to influence processes somewhere else to create a structure where you can drive change even though you don’t own it.
Every security issue is a quality issue, but not every quality issue is a security issue – but the root causes can be the same. So, if I fix whatever causes my engineering teams to produce a lot of vulnerabilities in what they develop, quite often I end up with cleaner code. It runs faster, it’s more stable, my customers are happier.
“Every security issue is a quality issue, but not every quality issue is a security issue – but the root causes can be the same.”
My AWS compute costs go down dramatically. And you end up saving the business a lot of money because you’re making quality enhancements that go beyond just removing vulnerabilities. They remove other defects, they improve performance, they improve reliability. There’s a lot of benefits and they’re all cumulative and sustainable.
MF: It sounds like you’ve met some resistance when spreading your thesis out in the world. Can you talk about the differences between the companies that are very receptive to this type of approach versus the ones that aren’t?
GVDG: There’s a real lack of accountability in security. There’s a lot of elitism. We’ve all sat in the room with security people badmouthing the users and the business, like: “Oh, management won’t give us money.” But we’re all very confident about how important we are.
But if I go up to your head of InfoSec, who is asking for $2 million of security spending, and I ask them, “Will there be a positive return on investment for the business?” They’re absolutely adamant: “Oh yes, yes, we’ll definitely save money this way.” I’m like: “OK, how about you pay for it yourself and then you get to keep all the ROI?” All of a sudden, no one’s very sure anymore.
I often say that security in many ways is the best job in the world because no one really understands what you’re supposed to be doing. No one knows whether you’re actually doing it. And, if you screw up bad enough, they triple your budget!
When I would go into a place as an auditor or a consultant, especially as a consultant, where you’re really trying to help them, they would get very upset at you.
They don’t like you criticizing or pointing things out that they didn’t think of. It’s very much like you’re calling my baby ugly. It gets hostile very quickly.
But, if you put the same group of people in a room and you’re not talking about their business specifically – you start explaining the concepts – then they just kind of light up and say: “This makes a lot of sense.”
They’re very keen to go into work the next day and start applying the principles because you haven’t insulted them directly. You’ve given them an idea, an approach that they can implement, take credit for, and then they’re all too happy to do it. But the direct approach tends to be very, very difficult.
MF: It’s easier to say “well, we mitigated these 47 vulnerabilities this year” than it is to say “nothing happened this year again. We’re all set.” How do you change the conversation to: “This is how you should start advocating for changes so that people can see the value. Because if you don’t, the bottom line is going to win out over everything else.”
GVDG: I think risk quantification is quite interesting but also pointless. Because OK, we removed 47 vulnerabilities, but what is the actual value of those vulnerabilities? Risk management calculations are – even the quantitative ones – extremely arbitrary.
And the next thing you know, it’s like, “Well, yeah, you removed vulnerabilities, but it’s actually running on a hypervisor running Windows 2008.”
Everything you’ve done can be circumvented like that. So how can you stand behind that value? I think risk quantification as a whole is very tricky because there’s no way of saying what those risks actually cost or whether they would’ve been exploited or not.
One of the points I like to make sometimes is this. You say: “We’ve done all the calculations and we’ve got an annualized loss expectancy for this risk of £200,000. We can mitigate it 90% for £50,000.”
That’s a good ROI if that quantification is accurate, which I highly doubt it is. But let’s assume it is. But then, what if you increase the scope of it: “We’re going to spend, let’s say £50K to eliminate a risk of £100K. What could marketing do with an extra £50K?”
And if the answer is, “Well, marketing could probably give us an extra million quid in sales if we give £50K,” does it still make sense to spend that money in security? Isn’t it better for the business to not do the security and do the marketing instead? How do you reconcile that?
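The back-of-the-envelope comparison behind that question can be sketched in a few lines. This is purely illustrative, using only the figures quoted in the conversation; as Greg argues, real risk quantification is far less certain than this arithmetic suggests:

```python
# Greg's example: an annualized loss expectancy (ALE) of £200K,
# with a control that removes 90% of it for £50K.
ale = 200_000             # annualized loss expectancy (£)
mitigation_cost = 50_000  # cost of the security control (£)
mitigation_rate = 0.90    # fraction of expected loss the control removes

risk_reduced = ale * mitigation_rate           # £180,000 of expected loss removed
security_roi = risk_reduced - mitigation_cost  # £130,000 net - if the estimate holds

# The opportunity-cost counterpoint: the same £50K spent on marketing,
# which "could probably give us an extra million quid in sales".
marketing_return = 1_000_000
marketing_roi = marketing_return - mitigation_cost  # £950,000 net

print(f"Security ROI:  £{security_roi:,.0f}")
print(f"Marketing ROI: £{marketing_roi:,.0f}")
```

The point isn’t that the numbers are right – Greg doubts they ever are – but that once you frame security spend this way, it has to compete with every other use of the same money.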
I don’t think security should be risk-led at all. I think it should be business-led and a quality function – fixing your engineering defects.
“I don’t think security should be risk-led at all. I think it should be business-led."
For example, by fixing the engineering defects that are introducing the security vulnerabilities, I am lowering your AWS costs by €2 million a year. I’m doing a security review of your Salesforce (this is an actual scenario), in which I spent €20,000 and have removed all the excess accounts, reducing your spend on Salesforce by €48,000 per year. I’ve just made you money by securing you.
If you look at security as a quality function on pure cost savings and agility enablement, you can justify it. The risk reduction is a byproduct. You don’t even have to count it. It’s just gravy. That’s the approach I’ve been taking because I can actually save the company more than double the cost of the security function – demonstrably.
I probably can’t even demonstrate half of what I’m saving them, but I can demonstrate that I’m saving them more than what they’re paying me!
MF: Where can folks go to learn more about you and the consultancy work? How do they bring you in and have you help them put some of this stuff into practice?
GVDG: I wrote a book about three and a half years ago called Rethinking InfoSec, which was just an amalgamation of articles. I’ve recently done a collaboration with Hitachi Vantara, a book called What We Call Security. That one is really calling out this quality approach because what we are doing simply is not working. Every year we spend more and every year we spend more as a percentage of budget. It’s unsustainable.
I’m also starting a new consultancy. I’ve not actually launched it, but by March it’ll be out there. The website’s up, it’s sequoia-consulting.co.uk. I’m hoping to really help people address these high-level, strategic structural leadership issues.
Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
Subscribe to our podcast
Android enthusiasts, your time has come. If you own a phone or tablet running Android 14 or higher, you can now save and sign in to many Android apps using passkeys.
Today’s announcement builds on the passkey support we released for the desktop version of 1Password in the browser and 1Password for iOS last year. Mac, Windows, iOS, Android – no matter your platform preference, you can now go passwordless and start unlocking the web in a faster and more secure way.
We’re thrilled that so many people have started using passkeys, and are delighted that Android device owners can now embrace them too.
Passkeys are a new kind of login credential that lets you quickly and securely log in to accounts on your desktop and mobile devices. They’re a form of passwordless authentication – there’s no password involved – backed by the largest technology companies and built on open industry standards.
Curious how passkeys work? Behind the scenes, the passwordless credential relies on public-key cryptography. That means every passkey consists of two parts: a private key and a public key. The private key is just that – private – and never shared with the service you’re signing in to. The other part is the public key, which is seen and stored by the website or app.
When you sign in with a passkey, the website or app creates a technical “challenge”, which is a bit like a special puzzle. You “sign” this challenge with your private key, and the website or app then verifies the signature using your public key. This quick back-and-forth relies on an API called WebAuthn, developed by the W3C in collaboration with the FIDO Alliance (1Password is a member of the Alliance!).
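The sign-and-verify dance can be illustrated with a toy example. The sketch below uses textbook RSA with tiny, insecure parameters purely to show the shape of the exchange; real passkeys use standard WebAuthn algorithms such as ES256 with properly generated keys:

```python
import hashlib
import secrets

# Toy RSA parameters, far too small to be secure -- for illustration only.
p, q = 61, 53
n = p * q                          # shared modulus
e = 17                             # public exponent -> public key (n, e)
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent -> private key

def sign(challenge: bytes) -> int:
    """The device 'signs' the challenge; the private key never leaves it."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(digest, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """The site checks the signature using only the stored public key."""
    digest = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == digest

challenge = secrets.token_bytes(16)  # the site's random "puzzle"
signature = sign(challenge)
print(verify(challenge, signature))  # True: identity proven, no secret shared
```

Because the challenge is random each time, a captured signature can’t be replayed for a later login, and the site never holds anything worth stealing except the public key.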
You can think of passkeys as the modern successor to passwords. Here are just a few ways that the two differ:
Here’s what you need to start saving and signing in to Android apps with passkeys:
Remember: You can also use 1Password on any Android device to view, organize, share, and delete your saved passkeys.
You might be wondering: what’s the benefit of saving my passkeys in 1Password instead of Google Password Manager? It’s a great question.
Here are two reasons to choose 1Password:
1Password works everywhere. Google Password Manager is designed to work in three places only: Android, Chrome, and ChromeOS. 1Password, meanwhile, has thoughtfully designed apps for every platform and supports every major web browser, including Chrome, Firefox, Edge, Brave, and Safari.
1Password helps you organize your entire digital life. Google Password Manager is focused on passwords and passkeys. 1Password goes beyond a simple password manager by letting you store, manage, share, and conveniently autofill credit card numbers, addresses, documents, and all of your other sensitive information.
Creating a passkey for the first time couldn’t be more straightforward. First, open Watchtower (you’ll find it in the navigation bar at the bottom of 1Password for Android) to see which of your logins can be updated to use a passkey.
We recommend these three Android apps if you’ve never created or used a passkey login before:
If it’s not already on your device, download the relevant Android app from the Google Play Store. Next, open the app and select the option to start using a passkey – it may be on the sign-in screen or in your account settings. Follow the instructions and, if prompted, choose to save your passkey in 1Password.
Once you’ve created and saved a passkey, you can use it every time you want to sign in to the associated account.
We’re delighted that so many Android developers have already updated their apps to support passkeys, and look forward to seeing the options grow in the coming months.
We’ve said it before and we’ll say it again: we’re all in on passkeys, and believe they’re our ticket to a truly passwordless future. This type of login credential offers a faster and more secure way to sign in to online accounts. It’s supported by a growing number of websites and apps, as well as all of the major operating systems and password managers like 1Password.
If you want to be an early adopter and fully embrace this new era of online security, 1Password is the way to do it. For years, we’ve offered a safe home for your passwords, credit cards, medical records, and more. And we haven’t tied you down into any specific platform or ecosystem. Now you can add passkeys to the list of data that 1Password keeps secure at your fingertips.
Ready to create some passkeys? Learn how to get started with 1Password for Android.
Get started with passkeysSave, manage, and securely share passkeys on your Android devices using 1Password for Android.
Download 1Password for Android
When you work in IT, you have a lot to manage. And while everything can feel critical, keeping the computers on might not mean much if your small business experiences a data breach.
According to recent reports, cyber attacks are currently disproportionately targeting small businesses.
“70% of cyber attacks target small businesses” – Business Insider
With the average global cost of a data breach at $4.45 million, many small business owners simply don’t have the capital to survive the damage caused by a cyber attack. Between losing critical data, the time spent trying to recover, and the loss of customer trust, it’s not surprising that 60% of small and medium-sized businesses (SMBs) that are hacked go out of business within six months.
But while the stakes may be high, IT teams can protect their businesses by bumping security up their to-do list and prioritizing proactive security measures.
There are many different types of cyber attacks businesses need to protect against, but we’re going to focus on four threats: phishing, weak passwords, reused passwords, and shadow IT. All of these risks have one thing in common: credentials.
Phishing attacks are a type of scam designed to trick people into sharing sensitive information. These attacks often take the form of emails, with cybercriminals in search of passwords, logins, or other secrets they can use to gain access to secure systems.
Password reuse is one of the most common security vulnerabilities businesses face. If the same password is used for multiple accounts, a hacker needs just one login to gain access to all the others. So if a single reused password is caught in a data breach, multiple accounts could be compromised.
Probably the most obvious risk is weak passwords that are easily guessed or cracked. Brute force, dictionary, and social engineering are all common attack types that take advantage of weak passwords.
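To see why weak passwords fall so quickly to brute force, a rough cost model helps. The guess rate below is an illustrative assumption (an offline attacker against a fast hash), not a measured figure:

```python
# Illustrative cost model for offline password cracking.
# The guess rate is an assumption, not a benchmark.
GUESSES_PER_SECOND = 10_000_000_000

def seconds_to_crack(alphabet_size: int, length: int) -> float:
    """Worst-case time to exhaust every password of this alphabet and length."""
    return alphabet_size ** length / GUESSES_PER_SECOND

weak = seconds_to_crack(26, 8)     # 8 lowercase letters
strong = seconds_to_crack(94, 16)  # 16 characters drawn from printable ASCII

print(f"8 lowercase letters: about {weak:.0f} seconds to exhaust")
print(f"16 mixed characters: about {strong / 31_557_600:.1e} years to exhaust")
```

The exact rate an attacker achieves varies enormously with hardware and hashing algorithm, but the exponential gap between the two password shapes holds regardless.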
Shadow IT refers to the apps your employees use that IT doesn’t know about. If a password is caught up in a data breach in a shadow IT app, the IT team wouldn’t know to ask employees to update passwords on those accounts, or whether any important information has been exposed.
Credentials are basically the lock on the digital front door of your business. But unlike a physical building with one or two entrances, your online space can have infinite entry points.
Indeed, each new account for every app by every employee creates a new door that gets locked behind a password. This exposure is what makes access control one of the most important parts of your cybersecurity strategy.
If every login is a door into your business, then the employees who create those logins hold the keys. When it comes to credential security, employees aren’t deliberately putting their company at risk when they fall for phishing scams, use weak passwords, or adopt apps outside of security’s purview.
Like IT teams, employees are trying to get their work done. Security policies can sometimes feel like a barrier to that end goal. Having to remember multiple complicated passwords slows down sign-ins when employees just want to get into an app. It’s convenient to use the same password for everything, but it’s definitely not secure.
And when it comes to using apps outside of the IT team’s purview, employees are usually just trying to use the best tool available. With a long to-do list, IT teams don’t always have time to review apps, and so employees just quietly use what they need in the shadows.
So what can IT teams do?
IT teams in small businesses are, unsurprisingly, usually quite small – sometimes with just one person responsible for IT, security, and more.
Trying to manage security effectively alongside competing IT and business responsibilities means constantly juggling priorities. With limited bandwidth, to-do lists get reordered endlessly just to stay on top of incoming requests, leaving little room for proactive work.
The way work gets done has shifted significantly as businesses move to hybrid models and some require employees to use their own devices. And as new apps come into play, the challenge of securing every employee, on every app, in any location only becomes more complicated.
Even if an IT team has managed to put security policies in place, making sure employees are following them is a whole other story. It can be easy to think security challenges are the IT team’s responsibility, but business cybersecurity is a team sport – you’re only as strong as your weakest link.
Creating a culture of security helps your team prioritize protection while working with them rather than against them. A few high-level ways to make security and productivity work in harmony are providing flexibility, increasing security adoption, and improving your overall security posture.
Security and productivity don’t have to be a one-or-the-other option. Check out our ebook Small business. Large security risks. for a more detailed look at how to keep your business safe and productive.
Read this ebook to learn how securing access to sensitive information and maintaining productivity doesn’t have to be a one-or-the-other option.
Download now
1Password doesn’t just keep your personal and work-related data safe. It also helps you keep them separate – and your company’s 1Password Business accounts include free 1Password Families memberships for all team members.
1Password Families is a personal account for you and up to 5 family members. It works in much the same way your business account does – but instead of being owned by the company, you own it. And instead of admins managing the account, family organizers manage it (that’s you, and anyone else you designate).
Because you own the account, if you and your employer ever part ways, you can keep using your Families account by simply updating your payment method. Access won’t be interrupted, and the personal data in your account will remain yours, completely unaffected by your departure from your company.
Employers never have visibility or access to anything stored in personal accounts. In fact, your company’s 1Password Business account and your 1Password Families account aren’t connected in any technical way. You simply have access to a free 1Password Families account by virtue of your employer’s 1Password Business account.
| 1Password Business | 1Password Families |
| --- | --- |
| Managed by your employer | Managed by you and/or a family organizer |
| Paid for by your employer | Free when linked to a Business account; paid for by you if you leave the business account |
| The account can be deleted by your employer at any time | The account can only be deleted by a family organizer in your family account |
Why offer free Families memberships to 1Password Business team members? Because separating your business and personal information and logins helps foster the ideal security culture: work information in 1Password Business accounts; personal information in 1Password Families accounts.
Mixing personal information with work information is a risk for you and for the company – especially when either one contains vulnerabilities like weak or reused passwords.
More than that, though, we offer free Families accounts for the same reason we offer 1Password at all, to anyone, and for the same reason we built it back in 2006. It should be easy to navigate your digital life securely. Every protected login is a win.
Redeeming your free 1Password Families account is easy. Follow these steps if you haven’t yet redeemed a Families membership:
If you do already have a 1Password Families membership, you can use it for free by linking it to your 1Password Business account:
If your 1Password Business admin has enabled the policy to help separate work and personal information, 1Password Watchtower can let you know if any items may be in the wrong account. In addition to tiles for things like weak passwords or compromised websites in Watchtower, you’ll see a tile for items you may want to move. Select “Show all items” to see them all as a list.
To clean up your work and personal accounts, and make sure each item is in the appropriate account, you can drag-and-drop items between vaults.
If you’re using 1Password for Mac, Windows, or Linux, make sure you’re signed in to both your Business and Families accounts, and click your account or collection at the top of the sidebar and choose All Accounts. Then, just drag existing items to a new vault to move them.
If you’re using 1Password on iOS or Android, select (or multi-select) the items you want to move. Next, tap the item menu and select “Move,” then choose the vault to move the item(s) to.
Visit 1Password Support for complete instructions for all platforms.
1Password Families is the easiest way to protect and securely share passwords, financial accounts, credit cards, and other sensitive information with the whole family. Learn how to invite your family, create a recovery plan, and more by visiting 1Password Support.
Fumbling with an app when you’re already stressed? We know the struggle. Also, is it just us, or does it always happen when you’re already having a bad day?
It may seem silly, but sometimes, a few extra clicks or typing can feel painful when you’re just trying to get stuff done.
That’s why, in 2024, we’re focused on making 1Password smoother, simpler, and more intuitive. We’re dedicated to making sure the secure thing is always the easy thing.
Throughout the year, we’ll continually improve 1Password so it can reliably work as you expect. No more struggles. We’ll keep you updated on added and improved features along the way, because every click and tap should feel effortless. The seamless experience you deserve.
Since the end of 2023, we’ve already made nearly 200 updates to 1Password. These updates focused on overall performance, reliability, and usability with the goal of simply making sure things work better.
This round, we devoted our energy to the browser extension experience and search within the 1Password apps because they are the quickest, most effective tools to find and use your passwords while online. We’ll be working on plenty more updates in the near future, so stay tuned!
In this blog, we’re sharing some highlights on how we’ve improved several different features in the 1Password web browser extensions and 1Password apps including:
1Password browser extensions:
1Password apps:
If you’d like to learn more about all the updates we’ve made, check out our release notes for all the details.
On certain websites, some page elements used to prevent the autofill dropdown menu from appearing or block autofill on login forms.
We’ve addressed this, so you can now seamlessly autofill on more sites and login forms. The 1Password browser extension now works more reliably in username fields, email fields, address fields, and credential forms, and autofills more efficiently on hundreds of sites, like Reddit and CVS.
Plus, we made a fix for many popular sites, like Walmart and ESPN, that used to close the 1Password browser extension before it could finish autofilling.
The 1Password browser extension now leverages smart titles for the top 900+ sites online. Previously, a site like American Airlines might have been automatically labeled simply as “AA” for the title of the item, but now, it’s accurately and automatically titled as “American Airlines,” making it more contextually relevant to the item you’re saving.
This streamlines the process of creating and saving new items faster and more accurately, and also makes it much easier to search for and find items later on.
If you’ve dealt with the pain of an unstable internet connection, you’ll love this update. Before, if you tried to save an item in the browser extension while you were offline, the item would save locally in the extension but wouldn’t sync across your other 1Password apps, like on your phone or tablet. That meant you couldn’t access the item elsewhere until the connection was re-established, which often didn’t happen quickly enough.
Now, the browser extension will better recognize when you’re on or offline, meaning if you’re ever disconnected and then reconnected to the internet, your password will save and sync across all the 1Password apps faster.
If you’re a Chrome user, you know the browser updates quite often as Google pushes out new features and security updates for users. Previously, this caused interruptions in the connection between the 1Password for Chrome browser extension and the 1Password desktop app, leading to frequent password re-entry to unlock the extension again after Chrome updates.
We heard your feedback on how frustrating that was. Now, whenever Chrome issues a pending update, you’ll no longer need to unlock 1Password again, meaning fewer interruptions to your daily tasks.
With this new update, we estimate that we prevented nearly 20 million instances of unexpected re-authentication. With each login taking about two seconds, we’ve collectively saved our customers approximately 462 days’ worth of time. That’s enough time to watch the entire Lord of the Rings trilogy (Extended Edition) 932 times or sail a pirate ship across the Atlantic Ocean 18 and a half times. Phew, that’s a lot of time saved!
Before, the 1Password for Safari browser extension didn’t filter suggestions in the autofill menu like it did for extensions for other browsers, like Chrome and Firefox.
We’ve fixed that, now making it easier to find and autofill the right details depending on what site you’re on – plus, if you have to use different browsers for any reason or end up switching some day, you can expect a consistent and familiar 1Password experience.
Now in the 1Password beta extensions, when you sign in to a site that you haven’t yet stored a credential for, 1Password will automatically create and save the credential for you after you’ve logged in – meaning you no longer have to manually save an item before you sign in.
Not only does this make saving your logins easier, but it also means you no longer have to manually update an item if you entered the wrong username or password and already saved it to 1Password.
But wait, there’s more!
Autofilling credentials is handy, but you often have to manually submit the form you’ve filled, keep clicking to progress through multiple pages, and select autofill again in the next form fields that could come up, like two-factor authentication (2FA).
That’s too much work, so we’ve introduced improved autofill automation for your logins. Now, once you’ve selected a credential to autofill, the rest of the process autofills, auto-submits, and auto-progresses through multi-page sign-ins all on its own, including 2FA codes.
Previously, if you were searching for something in 1Password, the results were shown within a list of all your 1Password items. That meant if you picked the wrong item, or wanted to look at multiple items with similar characteristics, you had to start your search completely over. Now, we show only the items that match your search, and keep showing them until you initiate a new search.
Plus, search filters are now visible and usable across all of the 1Password apps. This means you can easily see your recent searches on all devices for faster searching. We’ve also improved search based on customer feedback, and we’ll keep making it better throughout the year.
We’re continuously working to make sure 1Password is simplifying your online world, all while keeping you safe. With subscriptions based on your needs, you can protect yourself, your business, and even an enterprise, with the most reliable password manager around.
Your feedback about 1Password is incredibly valuable to us. Without you, we wouldn’t have been able to make all these improvements, or all the ones to come. Keep letting us know what you think – we can’t wait to hear from you!
Keep all of your accounts secure with 1Password, the world’s most-trusted password manager. Get started today with a free 14-day trial.
Try free for 14 days
This is the fourth and final post in a series on how to secure your hybrid workforce. For a complete overview of the topics discussed in this series, download The new perimeter: Access management in a hybrid world.
In the initial post in this series, we outlined four key considerations to securing your hybrid workforce: identity, shadow IT, the security vs. productivity tradeoff, and security costs.
Now that we’ve seen why identity is the right place to start, and how to secure access to both managed and unmanaged apps, let’s talk about worker productivity and cybersecurity costs.
Security software is notoriously hard to use. Instead of making things easier for end users, security tools often introduce new frictions into workflows. Hence the perpetual dance between security and productivity.
The situation also pits IT and other employees against each other. IT’s goal is to reduce their attack surface to avoid a security breach. Employees want to get things done. If security software is hard to use, those two goals are at odds. It’s zero-sum.
And when productivity and security face off, productivity often wins. A recent study found that 85% of employees knowingly broke cybersecurity rules to accomplish a task. IT and security teams are left with an impossible choice: Impose more tools and security measures to strengthen their cybersecurity posture, or reduce friction to help employees get things done. Either you reduce the risk of a cyberattack, or you make workers’ lives a bit easier. You can’t do both.
But those workarounds aren’t a malicious attempt to thwart IT. It’s just people trying to do their jobs. Employees are using their personal devices and preferred apps to get the job done, not to sabotage the company’s security posture.
Resolving the paradox requires expecting more from our security solutions, specifically in terms of user experience.
To illustrate how we might do that, consider the desire path. When building spaces, landscape architects (naturally) include paved walkways in their plans. But those paved walkways aren’t always the preferred route of those who use the space.
When people continually cut across the grass of a park, for example, and eventually wear down the grass to create an “unofficial path,” that’s a desire path. It wasn’t in the designer’s original plans, but that doesn’t matter to those using the space – they’re just trying to get from point A to point B as quickly as possible.
Hybrid work has created a similar, digital desire path. Instead of using only the apps managed by the company, workers are using shadow IT – both on company devices and personal devices – to get things done. That introduces new vulnerabilities. But what if IT could simply secure that desire path, instead of trying to force workers to stick to the paved walkways they’ve been avoiding?
If a security tool is hard to use, people won’t use it. Consider a few findings from 1Password’s Unlocking the login challenge: How login fatigue compromises employee productivity, security and mental health:
And that’s just logging in. If IT teams not only understood these frustrations but did something about them – say, by providing an enterprise password manager (EPM) that did the work of logging in for employees – both security and productivity would win.
Let’s say Taylor, a new employee, is setting up a new Airtable account to check the publishing calendar for their role on the social media team. Instead of creating a weak password that’s easy to remember, or reusing a password, Taylor uses an EPM to generate a strong, random, unique password.
Because admins can customize password policies, the password Taylor creates automatically complies with company security policies. And Taylor doesn’t have to remember that password or record it. The company can even mandate multi-factor authentication, which modern EPMs support.
And the next time Taylor logs in, they don’t have to guess how they logged in. Was it an email and password? Did they log in with their Google account? SSO? A passkey?
It’s all moot if their EPM remembers for them, and automatically logs them in. And when they need access to the company Instagram account (for which there’s only one login for everyone on the team), a colleague can securely share those credentials with Taylor.
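Under the hood, the strong-password step of Taylor’s workflow boils down to sampling from a large alphabet with a cryptographically secure source of randomness. Here’s a minimal sketch using Python’s `secrets` module; the policy check shown is a hypothetical example, not 1Password’s actual rules:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a strong random password, retrying until it satisfies a
    simple (hypothetical) policy: at least one lowercase letter, one
    uppercase letter, one digit, and one symbol."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

The key design choice is using `secrets` rather than `random`: the latter is predictable and unsuitable for credentials, while `secrets` draws from the operating system’s cryptographic random source.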
To secure access to shadow IT, you have to make it easy for workers to do their jobs securely. They have to want to use the security tool you’re offering. And that only happens when that security tool helps them get things done, instead of getting in their way.
Security can feel like a game of whack-a-mole. New technologies pop up, workers adopt them, and IT rolls out new tools to address the vulnerabilities those tools introduce.
It all adds up. Overhead and tools are two of the biggest contributors to cybersecurity costs. But it is possible to create efficiencies across both.
IT spends a surprising amount of time resetting passwords. 57% of IT workers reset employee passwords up to five times a week, and 15% do so at least 21 times per week.
That leads to IT spending nearly 21 days of work each year on tasks like resetting passwords and tracking app usage.
But both IT and workers can wrestle back a significant portion of that time with an EPM. For example, in The Total Economic Impact™ of 1Password Business, Forrester found that deploying 1Password results in:
SSO and EPMs can work well together within an identity and access management (IAM) framework. SSO secures access to applications managed by IT, while EPMs secure access to unmanaged apps, or virtually everything else.
But the costs of SSO can add up. It can take weeks or even months to implement SSO, and each application placed behind SSO needs to be configured. EPMs require less custom configuration – it’s a one-time setup and doesn’t require every app to be configured.
And even once SSO is deployed, it only secures access to 50-70% of the apps in use, according to Gartner. IT will have to dedicate time to add new applications, and many of those applications will charge extra for the ability to integrate with your SSO provider, a cost known as the SSO tax.
EPMs not only secure access to the unmanaged apps that SSO doesn’t cover, but also reduce cybersecurity costs through a less expensive rollout and by eliminating the SSO tax.
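As a back-of-the-envelope illustration of how those costs can compare – every figure below is an assumption for the sake of the sketch, not vendor pricing:

```python
# Hypothetical annual cost comparison. All numbers are made-up
# assumptions for illustration, not real pricing.
APPS_BEHIND_SSO = 40
SSO_TAX_PER_APP_PER_YEAR = 1_500  # assumed premium-tier upcharge per app
SEATS = 100
EPM_PER_SEAT_PER_YEAR = 96        # assumed per-seat EPM price

sso_tax_total = APPS_BEHIND_SSO * SSO_TAX_PER_APP_PER_YEAR
epm_total = SEATS * EPM_PER_SEAT_PER_YEAR

print(f"Annual SSO tax:  ${sso_tax_total:,}")   # $60,000
print(f"Annual EPM cost: ${epm_total:,}")       # $9,600
```

The absolute figures will vary widely by organization; the structural point is that the SSO tax scales with the number of apps, while EPM cost scales with seats.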
As a quick recap, here’s what we’ve covered in this series:
For an overview of each of the topics we’ve explored, download The new perimeter: Access management in a hybrid world.
Learn about the four key considerations to securing your hybrid workforce, and why reducing risk starts with securing employee login credentials.
Download now
Protecting remote and hybrid work requires securing both identity and devices, regardless of where employees work.
At this point, it’s safe to say work has changed. For those yearning for employees to return to the office, the reality is that hybrid and remote work are the modern “business as usual,” and there is no going back. Unsurprisingly, our new way of working has brought a slew of new security challenges that companies struggle to address.
Security is inherently a people problem. And when people no longer predominantly work from a corporate office, relying on security technologies built to secure physical corporate networks, and everything plugged into them, is now creating gaping holes in company defenses.
At 1Password, we’ve always put people front and center of security, striving to create products that are easy to use and make employees more productive. By making the productive way to work the secure way to work, we help companies enlist their employees to be a part of their perimeter defense.
That brings us to today’s news: 1Password has acquired Kolide, a next-generation device security solution.
Why would 1Password acquire a device health and contextual access management solution? The reality is that access isn’t secure if the device doing the accessing isn’t secure. This is part of the complexity of the modern way we work. Every device, regardless of location, must be secure – just as every login, regardless of location, employee, or type of device used, must be secure.
This is where Kolide fits into the 1Password story. Kolide is a leader in device health and contextual access management, and companies need a way to ensure that both the device used and every access request are secure. What also makes Kolide particularly compelling is how the company has taken a similar approach to 1Password and works to enlist employees to deliver better security. This is only possible by providing employees with tools that make security easy to use and adopt, enable them to secure their own activities, and provide them with the context to make the right decisions at the right time.
In fact, Kolide’s philosophy of Honest Security mirrors our deeply held values: that security can only work through a positive relationship with end users, and that privacy must be respected at every stage of the journey, demonstrated through informed consent and transparency. Kolide’s message is resonating across the market, and leading companies including Databricks, Robinhood, Discord, and Anduril rely on Kolide to secure their teams.
Turning your employees into security advocates is critical, because it’s no longer possible for IT or security teams to micro-manage every device or every application that employees use – especially for remote and hybrid workforces. By shedding light on the currently untenable state of IT and security, corporations can shift their mindset toward an approach where security empowers end users to use the tools they need, while also making them active participants in securing the applications they use. And 1Password with Kolide does just that.
Please join me in welcoming the entire Kolide team to 1Password. We’re thrilled they’re joining us on our shared mission of building a safer, more secure future. And based on Kolide CEO Jason Meller’s perspective below, I’d say we’re well on our way.
“Kolide was founded on the idea of Honest Security, a philosophy that, when combined with the principles of Zero Trust, transforms end users into the most effective security solution IT will ever have. We are combining forces with 1Password for one reason: we both believe every company on Earth needs user-focused device security. With 1Password, we now have the resources to make that belief a reality.”
If you’d like to learn more about how Kolide and 1Password solutions can secure your organization, let us know.
Last week was a hackathon week at 1Password. We take time twice a year to pause our normal day-to-day tasks and focus on exploration and learning. These hackathons are a great opportunity to work with different folks, exercise some different muscles, and have a great deal of fun in the process. I’d love to tell you more about our latest hackathon!
The hackathon’s theme was “Beyond Boundaries”, and staff could choose from a few broad categories.
We encourage everyone in our Tech, Product & Design departments to set aside work to participate in the event, and ask them to self-organize into teams and projects. This means that the hackathon projects aren’t defined by leadership – they’re entirely grass-roots driven.
We recommend folks work with others outside of their team, as this is a great way to meet others and learn from them. This can be a bit of a chicken and egg problem … how do folks know who to work with? Surely they won’t go knocking on a random person’s [virtual] door and say:
“Can we hack together?”
We solve this by having a centralized hackathon project idea list. If there’s something a member of staff really wants to work on, they put it up on the list and see if others gravitate towards it. People can work on any part of the product, meaning they aren’t constrained by the area they normally work on at 1Password. The project board lists the skill sets that would be useful for each project, including non-coding skills, which helps people more easily find a great project to contribute to.
For this hackathon, I personally deviated from our guidance a bit. I’ve recently created a new team, and it’s still in its forming stages, so I proposed that we use this opportunity to work closely together on a project. We added our project to the list and a few developers from other teams joined us because the project appealed to them.
Our hackathons are short. Or at least they feel short. It’s one of those things where any fixed period of time will feel too short as our dreams are always bigger than what the time will allow for. Our hackathons are effectively split into three parts:
The first part is the hacking itself. This is where we sit down and actually write our prototype. There really aren’t any limits here other than “fit into one of the broad categories.” The goal is certainly not to write code that will ship to production right away. Instead, we put a strong emphasis on creating an MVP of the concept.
We work hard to prove that our ideas are possible. Words like “hack-crimes” are uttered frequently as developers try to find the fastest way to demonstrate their idea, and folks commonly share their most heinous crimes with the rest of the team on Slack.
The actual output of our three days of hacking away is a video demo, so while we’re building we also need to plan and produce the final video.
Of course, we all want to see what everyone else has built. We used to have each team present their project, but as we’ve grown, so has the number of projects, and that approach has become unsustainable. Instead, each team is expected to create a demo video of their project, helping others understand the challenge the project is targeting and how it solves the problem.
The only constraint imposed: The video should last only two minutes.
The creativity that comes out of these videos is pretty amazing. Two minutes is simply not a lot of time, so everyone tries to find ways of cramming in as much information as possible. And then there’s the production quality! I’m always blown away by the amazing videos that are produced. They’re inspiring, and just a little silly.
These videos are all due by the end of day three. For our latest hackathon, you better believe that I was up until midnight putting the final touches on ours. I was unlucky enough to have the video editing app I was using crash after two hours – and I hadn’t hit the save button. Was I ever thankful that it had auto-saved a few minutes prior to the crash!
Day four is when everyone is expected to watch the demo videos. Some teams create watch parties and view them all together.
A little bit of friendly competition can make anything more fun. The hackathon organizers choose judges for each category, and all of the participants vote on the “Bits Choice” award. On Friday we hold a large call where the winners are announced.
Regardless of who wins awards, we all come out winning (and I don’t just say this because our team didn’t win). It’s a week where we get to set aside our normal routines and deliverables, and scratch whatever itch we may have. It’s amazing to see so many great ideas from so many different teams.
It’s also not uncommon for one or more of the hackathon projects to turn into full-fledged features after the fact. For example, the recently released Nearby Items came out of the last hackathon.
I’d love to share a few of the demo videos that came out of the Beyond Boundaries hackathon. I want to emphasize that these projects don’t necessarily represent our roadmap; they reflect the ideas of individuals rather than the company as a whole.
First up we have 1PasswIRC, a project that aimed to answer the question: “What if we leveraged the end-to-end encryption technology we already have to power group chat within the app?”
Next is B5X Diagnostics Reports. B5X is the internal name for our browser extension, which is by far the most popular way to use 1Password. This group decided to see how we could more easily get diagnostics reports from the app so that we could better support our users.
Lastly we have Webhooks For Item Updates. I love seeing integrations between 1Password and other services, and webhooks are a great way to enable that.
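Webhook integrations like this typically authenticate each delivery with a shared-secret signature so the receiver can reject forged events. As a rough illustration (this is a generic pattern, not 1Password’s actual webhook API; the secret, payload, and header name are made up), a sketch in Python:

```python
import hashlib
import hmac


def sign_payload(secret: bytes, payload: bytes) -> str:
    """Compute the hex HMAC-SHA256 signature a sender would attach to a webhook."""
    return hmac.new(secret, payload, hashlib.sha256).hexdigest()


def verify_signature(secret: bytes, payload: bytes, signature: str) -> bool:
    """Check a received webhook body against its signature.

    compare_digest performs a constant-time comparison, which avoids
    leaking information through timing side channels.
    """
    expected = sign_payload(secret, payload)
    return hmac.compare_digest(expected, signature)
```

A receiver would read the raw request body plus a signature header (say, a hypothetical `X-Signature`) and call `verify_signature` before acting on the event.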
I hope you enjoyed the videos. If these hackathons sound like fun to you, consider joining our team!
Browse our current job openings to see if there’s an opportunity that matches your career goals.
“A man walks into a bank…” That may sound like the start of a joke but as hacker and security consultant Jayson E. Street tells it, it’s really nothing to laugh at. He’s walked into banks, hotels, government facilities, and biochemical companies all over the world and successfully compromised them.
Street is an adversary for hire, Chief Adversarial Officer for Secure Yeti, a DEF CON group global ambassador, and the author of the book series Dissecting the Hack. He sat down with Michael “Roo” Fey, Head of User Lifecycle & Growth at 1Password, on the Random but Memorable podcast to share some fascinating stories about how he “hacks” human nature to get in the literal front door and compromise businesses.
Read the interview highlights below or listen to the full podcast episode.
Editor’s note: This interview has been lightly edited for clarity and brevity. The views and opinions expressed by the interviewee don’t represent the opinions of 1Password.
Michael Fey: How did you get into penetration testing?
Jayson E. Street: In 2000, I found out that you could do security and computers. A VP of an internet bank hired me into network security. For the first 10 years of this new career, I was doing defensive blue team work (defending against attackers). Then I realized: I have to start testing the things that I’m making as if I were a hacker.
Around 2010, I was working for a bank, testing our defenses. That’s when I discovered I was really good at robbing banks. I started doing that more, as well as consulting. I branched out to robbing hotels, research facilities, and government facilities.
In 2016, I started a thing that’s never been done before: security awareness engagements where I use red team tactics (attacking cybersecurity defenses), but for educational purposes.
One of the things I love about Secure Yeti is that they believe in this too – that it’s about education, not exploitation. It’s about educating people so they can become better. The red team only exists to make the blue team better. We’re there to help validate their security, build them up, and teach them what they need to do – not just try to tear them down and break stuff.
MF: Can you walk us through your process for penetration testing? I’m sure the ultimate goal is getting in and getting the prize, but how do you approach it?
JES: Honestly, that’s not always the goal. I guarantee to my clients, in our contract, that I will get caught during the engagement. Because again, I’m trying to teach them. If we give them a report and it’s like, “Oh, I just destroyed everything,” the only thing that gets back to the employees is that they failed.
I’ve had to work at giving wins, but I make sure that everybody wins at least once. Then I can say, “Okay, yeah, we have to work on these things. But hey, look at Ann. She didn’t open the door for him. She questioned him. She checked his ID, she reported it to security and he got caught.” It makes it a little more of a positive experience.
There are so many red team people who are so focused on winning and think: “I’m going to go in, I’m going to punch them in the face and shoot the guy.” There’s all this toxic masculinity throughout the red team, unfortunately.
My whole thing is, I don’t want my clients to see sophistication. I want to show them how bad the situation really is – how basic it can be.
“I want to show them how bad the situation really is – how basic it can be.”
I’ve got a video from a talk I gave. I used a hidden camera to show how I literally walked through the front door of a bank while employees were still on their lunch break and compromised the first machine in 15 seconds. I finished the attack in under 30 seconds.
An employee did the right thing and stopped me, but then she let me intercept the conversation: she assumed I was going to be honest when I talked to the manager. She escorted me to the manager’s office; the manager saw that I was waiting, but there was someone else in the office. The employee believed me when I said, “I’ll talk to him,” so I dismissed her and she left.
I went into the manager’s office and assumed the role of, “I’m here with the help desk. We’re trying to make the network faster.” He escorted me to every machine, and I did a 100% compromise of every machine in that branch, including the wire transfer computer and the network servers. He gave me full access to everything, and he walked with me to do it.
MF: Wow.
JES: Everybody worries about Zero Days. It’s like, “Oh, I got to worry about AI. I got to worry about all this blockchain and the kill chains coming in at us.” And I’m like, “You keep talking about how we need to secure low-hanging fruit. Screw the tree, OK? You’re not ready for the low-hanging fruit. You’ve got fruit rotting on the ground. Pick that stuff up, do some proper asset management, and do some proper patch management."
We want to keep looking at all these other things that we’re supposed to be defending against when it’s the simple stuff of someone walking in off the street. Or someone sending an email that ends up costing a company $300 million.
MF: Can you recall an infiltration where you really had to do your research? Maybe you used social engineering, or monitored people’s patterns at work?
JES: One time I was robbing an institution in New York City. It was across the street from Ground Zero in the financial district. It was very high security. They did not expect me to get in. This is the reason why I still say to this day that the only thing worse than no security is a false sense of security.
They had canine SWAT police officers patrolling the mall and the lobby areas. They had four to six security guards. In the main elevator lobby, you had to show them your driver’s license and get an ID name tag with your picture on it before you were allowed to go through the metal detectors, which led to the elevator and up to the office.
I went in on the first day. I went up to the security desk to see if I could get a job interview. They were like, “Nope, you have to call ahead.”
So the next day I go back in. By the way, you always try to attack people in office buildings with building security between the hours of 4PM and 6PM. The 7AM to 3PM shift, that’s your A team, the people who are on the ball. The 3PM to 11PM shift goes to new hires, the ones that aren’t set in the patterns, the ones that don’t know everybody.
“You always try to attack people in office buildings with building security between the hours of 4PM and 6PM.”
When I showed back up the next day around 4:30PM, the company was having a meeting upstairs and there was another guy waiting to get up there, too.
I did a crosstalk attack like I did with that bank manager. I talked to one security person and then I talked to the other one and they saw me talk to that person. They made my ID and created my badge. I struck up a conversation with a guy who was legitimately going to this place like, “Oh, you’re going up there, too?” “Yeah.” It made it look like we were together. So when the receptionist came down to escort us up in the elevator, she made the assumption that we were together.
As soon as we got upstairs into the lobby area, I said: “I’ve got to go to the restroom. I’ll meet you in the conference room.” I go and I see an open door that goes to the mailroom. There’s an unlocked computer there and I compromise the first machine. I’ve already compromised their network. And then I go to the break room.
I don’t attack people over social engineering. I attack human nature. How people operate. Being on the spectrum, it’s like I had to be raised to try to watch people and figure out how normal people work, because they’re terrifying. That’s why I’m so successful at robbing people on five different continents.
“I attack human nature. How people operate.”
It’s like the biggest myth that society tells us: that we’re so different. The truth is we’re all humans! I don’t care if you’re in China, Singapore, Brazil, or Britain – guess what? You’re the same people. You all still come up with the same assumptions. You still come up with the same kind of attitudes. That’s what I’m trying to rob – I’m going after human nature.
MF: I’m curious to hear a story where you were just completely shut down at every turn, where people did everything right.
JES: I’m so glad you asked that. No one talks about it enough. It’s like everybody wants to talk about me accidentally robbing a bank, or something like that, because it sounds cool.
But I did rob a bank in 2020 where it was a fail. I had robbed the same place in 2019, and I destroyed them. They’d never had a red team engagement where they actually got up into their office area. And within 30 minutes, I was sitting at the desk of the person who hired us. When he came out of a meeting, he saw me at his desk. He had to go with me to take the badge back that I had stolen off of someone’s desk. It was bad. But that’s not the story.
Companies are paying for you to communicate to management why these changes need to happen. I did a report. I didn’t do a nice little written report. I educated management about what was going on, how I was able to do these things. I had security go on a walk with me and watch as I compromised some people live – and their jaws just dropped.
In January 2020, I went back to this client. I changed my appearance. It’s like I knew it was going to be more difficult. I might be recognized. It was a brand-new receptionist. Didn’t matter. I didn’t get in. I walked up like I owned the place. I didn’t even get to the stairs in the lobby before she said: “Excuse me, you need to sign in.” I was like, “How does she know I’m not an employee?”
That year, during their company all-hands meeting, the CEO, who only gets one hour to speak, spent 15 minutes on security. He spent 15 minutes talking about the responsibilities of employees for security awareness, maintaining the security of their personal items, computers, and cubicle space.
“During their company all-hands meeting, the CEO, who only gets one hour to speak, spent 15 minutes on security.”
They also instituted color-coded lanyards. If you had a green lanyard, you were an employee. If you had a red lanyard, you needed to be walked in and escorted everywhere. And if you had a yellow lanyard, you were a contractor, but not trusted. I didn’t know that at first. So, I registered. I put the name of the person I’m supposed to be working with, and then of course, I was like, “I need to go to the bathroom.”
Instead of turning left into the bathroom, I turned right down this hallway and compromised two machines right off the bat. I’m technically successful. But that didn’t matter. Because there was a woman who was in her office. She got on the phone and reported me because she knew I was sketchy. It was awesome.
“She got on the phone and reported me because she knew I was sketchy. It was awesome.”
I could have gone to the stairs so I could say I ‘escaped’ and therefore won. But no, that’s not what it’s about. So, I start walking toward the receptionist’s office. The guy who I was there to meet was already coming down the hallway because reception reported that I deviated from the path. There was a camera right above the hallway that she gets to watch. She saw that I went the wrong way.
Throughout that whole engagement, even though I compromised every section, someone stopped me. Someone said “no”.
And that’s including the second day. That night, I went back and I got the cleaning crew to let me in. I broke in and I stole all the lanyards – the green ones and red ones and yellow ones. On the second day, I had a green lanyard because those were cool. But they still questioned me and said “no.” They were like: “I’m not allowed to let anybody plug anything into the computer unless I get an email from the help desk. I didn’t get one. If you don’t mind, I’ll call them and verify. And what’s your name again? So I can see if they know you.”
I validated that their security programs were working because, even though I was successful, I was not successful for more than 15 minutes without someone stopping me.
“We need to stop trying to build defenses as walls. What’s more important is how quickly you can detect and how quickly and effectively you can respond.”
Humans make mistakes, but if someone corrects and reports a mistake, you’re dealing with a 15-minute breach versus a five-month breach. That’s important because we can’t prevent everything. We need to stop trying to build defenses as walls. What’s more important is how quickly you can detect and how quickly and effectively you can respond. That’s what determines whether a company survives a breach or not.
MF: I appreciate you making the time for us today! Is there anywhere that people should go to learn more about you?
JES: My main site is jaysonestreet.com. Places I go: hackeradventures.world. And I live-tweet my life on Twitter.
Listen to the latest news, tips and advice to level up your security game, as well as guest interviews with leaders from the security community.
You likely know you should store and manage your passwords safely. However, even if you are using a password manager, there’s a chance the one you’re using isn’t as secure as it could be. In this article we go over the threats some password managers pose and present a powerful, safe alternative.
Most password managers exist first and foremost as browser extensions: small programs that run in your browser. This is vital, as otherwise they couldn’t perform functions like autofilling passwords or offering to store newly created logins.
Because they run in your browser, most people’s first encounter with password managers is with the ones that come built-in. You’re likely familiar with the default password manager in Google Chrome, Microsoft Edge, Mozilla Firefox, or Apple’s iCloud Keychain.
However, all of these have some pretty serious issues. The Google Chrome password manager is reasonably safe from outside attackers, but it’s a threat to your privacy. It locks you into the Google ecosystem by making sure you use only Google products, enabling the company to gather as much data as it can about your online behavior. It can in turn sell this data for a tidy profit.
Much the same goes for iCloud Keychain. Though it’s nowhere near as egregious as Google, it also serves to keep you on Apple products, with no ability to switch between different types of devices. The Firefox password manager is a lot better, but it still lacks features: for example, it doesn’t let you save credit card information.
The only good way to solve these issues is to use a dedicated, standalone password manager. These offer a lot more flexibility, for example not only working as a Chrome password manager extension, but also in other browsers, on mobile, and even on desktop.
This is why we developed Proton Pass: to give internet users a password manager that can free them from the grasp of Big Tech while still offering state-of-the-art security and privacy.
The biggest difference between Proton Pass and the built-in Chrome password manager is that you can use it on any platform and your passwords come with you. If you have the Chrome browser on your laptop, but have an iPhone in your pocket, you will barely notice the difference when logging into your accounts.
This isn’t the only advantage Proton Pass offers, either. It also uses end-to-end encryption, which secures all your logins, bank cards, secure notes, and important metadata from the moment they’re on your device, and keeps them encrypted even on our servers. Even if there were a breach (which has yet to happen), hackers would only make away with useless ciphertext.
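To illustrate the end-to-end idea in general terms (this is a minimal sketch of client-side key derivation, not Proton Pass’s actual cryptographic design), the key that protects your data can be derived from your password on your device, so the server only ever stores ciphertext:

```python
import hashlib
import os


def derive_key(password: str, salt: bytes, iterations: int = 600_000) -> bytes:
    """Derive a symmetric encryption key from the user's password on the client.

    The password and the derived key never leave the device; only ciphertext
    (and the random salt) are uploaded, so a server breach yields only
    unreadable data.
    """
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)


# The same password and salt always yield the same key, so the client can
# re-derive it on any device without the server ever learning it.
salt = os.urandom(16)
key = derive_key("correct horse battery staple", salt)
```

The salt is stored alongside the ciphertext; the high iteration count makes brute-forcing the password from the ciphertext expensive.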
Like all Proton apps, Proton Pass is open source, meaning anyone can verify our code does what we claim. This is a key part of our belief in the power of transparency and peer review to ensure secure systems. You can see all our code and third-party security audits on our open source page.
Proton Pass also goes the extra mile in other ways. For example, you can store a lot more than just passwords and other login information. You can also add credit card details and secure notes whenever you need to keep something like a PIN or security code safe. We also let you use passkeys on all your devices, unlocking this powerful new tech for everybody.
Finally, we also allow you to add an extra layer of privacy while creating accounts thanks to hide-my-email aliases. These act like surrogate email addresses, hiding your real email address but still letting you receive email like normal. If you start to receive spam or the alias is revealed in a data breach, you can delete it and switch to another.
Unlike Google or Apple, we don’t receive our funding from venture capitalists or advertisers. All our funding comes from our community’s subscriptions to premium plans, meaning we can, and always will, put your needs first.
If that sounds like something you want to be a part of, create a Proton account today and see what it’s like to use products built with users first in mind.
We all have sensitive personal information we’d rather not share, whether it’s documents, photographs, or even private videos. This article covers how to handle sensitive information or records, and what you can do to keep private information private.
Let’s first define what sensitive information is, as it can mean different things to different people. For our purposes, sensitive information is any data you don’t want other people to know without your express permission. This could be anything, too, ranging from your Social Security number (or your country’s equivalent) to tax declarations, bank statements, or family photographs.
There’s also business-related information you may consider sensitive, like financial information, but also market research, client directories, and work product — anything you’ve worked hard to build and want to make sure competitors don’t get their hands on.
Sensitive information isn’t only found in documents, photographs, or videos, either. It could also be something like your bank PIN, a note with some important information, or a file’s metadata.
Luckily, thanks to modern technology it has become very easy to take any information and store it securely, away from prying eyes.
The best way to handle sensitive digital information is to restrict access. For physical documents this means using a safe or a locked drawer; for anything digital, you need an electronic equivalent. The simplest thing you can do is take all your sensitive files, put them into a folder on your hard drive, and then password-protect that folder.
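On Unix-like systems, the most basic way to restrict access to a folder is through file permissions (note that this is access control, not encryption; for true password protection you’d still want an encryption tool). A minimal sketch:

```python
import os
import stat
import tempfile

# Create a directory for sensitive files and restrict it to the owner only.
# 0o700 = read/write/execute for the owner, no access for group or others.
private_dir = os.path.join(tempfile.gettempdir(), "sensitive-files")
os.makedirs(private_dir, exist_ok=True)
os.chmod(private_dir, stat.S_IRWXU)  # equivalent to `chmod 700`

mode = stat.S_IMODE(os.stat(private_dir).st_mode)
print(oct(mode))  # 0o700: only the owner can enter or list the directory
```

Other accounts on the same machine can no longer open the folder, but anyone with administrator access (or physical access to the unencrypted disk) still can, which is why encryption is the stronger layer.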
However, this is just the first layer of security. As we explain in our article on the principle of “3-2-1 backup”, you need to build some redundancy. If you keep everything on your computer and nowhere else, and something happens to that computer, the information will be gone forever.
This is where cloud storage comes in. As the name suggests, it’s a service that allows you to store files in the cloud. It’s great for sensitive information: as long as you have internet access your files are always available, and they’re kept safe in a secure, remote location. If you back up all your sensitive files both in the cloud and on your computer, you have the convenience of having them at hand and the peace of mind that comes with having a backup.
Some cloud storage services are more secure than others. You only have to look at the long list of Dropbox security incidents to see that even mainstream providers can encounter security issues. We created Proton Drive to store and share sensitive files using more advanced security than many other services offer.
Unlike many of our competitors, we use end-to-end encryption to safeguard all the files you upload to our cloud. This means that nobody, not even Proton, knows what’s in your account, and even if we were breached somehow — something that has yet to happen — the hackers would only see unreadable encrypted files. Your sensitive information would still be totally safe.
We also offer unprecedented hardware security: All our servers are encrypted, too, and backed up so that should something happen at our datacenters, your files are safe.
On top of that, Proton Drive also makes it easy to share files securely. When you decide to share sensitive information with somebody, you get to decide exactly who gets to see what you’re sharing. You can even password protect a folder to further increase security. Naturally, you can also stop sharing at any time, or even make links expire after a certain time.
Proton Drive also has the ability to sync certain folders, thus ensuring they’re always up to date in the cloud. This means it works as an automatic backup for all your sensitive files, so you’ll never have to worry about having the latest versions backed up.
Finally, Proton Drive offers one truly unique selling point, namely Proton itself. In the decade we’ve been in business we’ve become a byword for privacy. We’re entirely funded by you, our users, and thus have no shareholders to please or special interests we need to watch out for. Our community is our only priority.
If this sounds like something you’d like to be a part of, create a Proton Drive account today. The first 5 GB is free and will remain that way. The free plan also includes access to Proton Pass, which allows you to store any short notes securely without needing to create an entire new document.
Social engineering is a common hacking tactic that uses psychological manipulation to access or steal confidential information.
Attackers then use this information to commit fraud, gain unauthorized access to systems, or, in some cases, steal your identity. Businesses in the US, for example, lost over $2.9 billion to business email compromise in 2023. Many of these attacks involved phishing, one of the most common social engineering scams.
By understanding the mechanics of common social engineering tricks and implementing strong cybersecurity defenses, you can better secure your most sensitive, valuable information.
This article digs into the different types of social engineering attacks and explores ways you can protect yourself and your business from falling victim to these deceptive practices.
Rather than targeting weak code, social engineering leverages weaknesses in human psychology to gain access to buildings, systems, or data. Most often, social engineering exploits our natural human tendency to trust.
Cybercriminals are getting better and better at disguising themselves as well-meaning actors, using persuasive language to lure victims into divulging information they wish to keep private and secure.
For example, an attacker might send you an email that appears to come from a well-known company or service, asking you to confirm login credentials or personal information. This type of communication can create a sense of urgency or fear and make you think there’s a problem with your account that needs immediate attention. Many think they are responding to someone who has their best interests in mind and provide the information, such as login details or a one-time passcode — only to have that information used against them.
Social engineering attacks are not confined to email, though this is the most common vector. They can also happen over the phone, on social media, or in person.
Cybercriminals have an extensive toolbox of social engineering tricks.
Phishing involves sending legitimate-seeming emails or messages with the sole intention of extracting sensitive data, such as passwords or credit card information. These emails and messages can appear astonishingly real, tricking you into believing they are from a trusted sender.
Attackers often use a legitimate domain, such as PayPal, to send fake invoices claiming you owe a balance and including a button to pay.
Baiting dangles enticing offers, such as free software, to lure victims into traps that may lead them to unwittingly install ransomware. The promise of a free movie download, for example, could trick you into downloading a file that compromises your computer.
Whaling targets senior executives, tricking them into transferring funds or revealing sensitive information. Usually arriving as email, these attacks appear legitimate, with urgent requests or malicious links that make them harder to detect.
Scareware involves sending false alarms and fictitious threats to coerce potential victims into downloading or installing harmful software. These threats may claim, for example, that your system is infected with a virus that requires a special type of security software, when in fact that software is malicious.
Dumpster diving, although more elaborate and involved, is another common social engineering move: sifting through your trash to find bills, bank statements, pre-approved credit card offers, or other documents with sensitive information that can be used for fraudulent activities.
Also called “piggybacking,” this brick-and-mortar tactic involves attackers gaining entry into secured areas by following closely behind authorized personnel. Tailgating exploits the common human instinct of holding doors open for others, especially in busy areas.
You’ve probably heard of the so-called Nigerian prince scam, in which an attacker asks you to help transfer a large sum of money from abroad in return for a cut of the cash. Of course, you must first hand over your bank account details or pay a “processing fee” to get it.
In a quid pro quo attack, attackers offer services or benefits in exchange for information. A hacker, for example, might offer to fix a computer issue, requiring you to download a remote access tool that ultimately gives the attacker control over your computer.
There are several strategies you can use to limit or prevent the risk of social engineering attacks:
Be wary of opening attachments or clicking links in emails from unfamiliar sources, as they may contain malware or point to phishing sites.
If an offer seems too generous without any apparent catch, it's likely a baiting tactic designed to exploit you.
The less information you share online, the harder it will be for attackers to target you with personalized scams.
Keeping your apps and operating system up to date ensures you have the latest protection against new threats.
Regular backups can help you quickly recover from an attack without significant loss of information.
Shredding or otherwise thoroughly destroying documents containing personal or sensitive information can prevent it from being discovered and used maliciously.
Plugging in unknown USB devices can introduce malware to your system. Disabling autorun prevents the automatic installation of potential ransomware.
Adding an extra layer of security beyond just passwords can significantly enhance your defenses against unauthorized access.
Use strong, unique passwords on all your online accounts. Proton recommends using an open-source password manager to help you create and remember strong passwords. Additionally, enabling two-factor authentication (2FA) adds an extra layer of defense. If your usernames or passwords are ever compromised, scammers won’t be able to access your accounts.
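A password manager handles generation for you, but the underlying idea is simple: draw each character from a cryptographically secure random source rather than a predictable one. A minimal sketch in Python (the function name and alphabet are my own choices, not any particular manager's implementation):

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*-_"
    # secrets draws from the OS's cryptographically secure random source,
    # unlike the `random` module, whose output is predictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_password()
```

A 20-character password over this roughly 72-symbol alphabet gives well over 100 bits of entropy, far beyond what brute-force attacks can search.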
In the face of social engineering threats, Proton offers a comprehensive suite of products and features designed to safeguard your digital life.
Proton Mail is built to recognize and isolate phishing emails, significantly reducing the risk of scam messages reaching your inbox. With end-to-end encryption at the heart of our services, we've designed Proton Mail with several layers of cybersecurity defenses:
Our encryption extends to forwarded messages, file sharing, and all events organized in Proton Calendar, allowing you to maintain workflow and schedule meetings without compromising security.
Proton VPN also masks your online activities and location from potential eavesdroppers, making it difficult for attackers to gather information about you that could be used in social engineering attacks. For companies, a Proton VPN for Business account grants access to an extensive server network spanning 85+ countries across six continents, guaranteeing you and your employees will always have access to a fast, secure VPN server — no matter where your operations or employees are located.
Proton Drive protects your files from unauthorized access. All your files, file names, and folder names are fully encrypted at rest and in transit to your secure cloud. With a Proton for Business plan, each user in your organization gets 500 GB of storage, providing the space and security your business needs to operate without worry of cybersecurity threats.
Proton Pass makes it easy to securely share logins and — if you’re a business owner — control who has access to sensitive logins. Administrators get additional access to tools to ensure their teams adopt cybersecurity best practices, including two-factor authentication. A Proton Pass for Business account gives you access to 50 vaults, unlimited aliases, and our high-security Proton Sentinel program, which works for both Proton Mail and Proton Pass and has blocked thousands of account takeover attacks since it was launched in August 2023.
Proton Mail also offers a simple-to-use feature called Easy Switch that allows you to seamlessly transition to your new Proton Mail inbox, back up data, and import messages, contacts, and calendars from other email services, such as Gmail. It’s easy to transfer your data to Drive and Pass as well.
When you create a Proton Mail account, you are both protecting your most valuable data from social engineering attacks and helping build a better internet where privacy is the default.
WhatsApp is the world’s leading messaging app, trusted by billions of people around the globe to send and receive messages. However, is WhatsApp safe for sending private photos? Or are there better ways to share photos online privately? Let’s find out.
WhatsApp has set up its privacy protocols to protect you from outside attackers who might try to intercept your messages or steal your private data and pictures. It uses end-to-end encryption, which encrypts your messages on your device and only decrypts them once they arrive on your recipient's phone.
End-to-end encryption is a powerful tool — we use it ourselves in all Proton apps. WhatsApp uses the open-source Signal protocol, which is used by many other messaging apps and has been found to be secure in numerous studies. At first glance, WhatsApp should be safe for sending private pictures. However, encryption isn’t the whole story.
End-to-end encryption protects the contents of your messages, be they a private photo or quick text, so that no one, not even WhatsApp, can see them. However, when you send a message, you generate all sorts of data beyond just the content of your texts, such as the information necessary to deliver the message.
Called metadata, this information includes the device you’re using to send the message, which account it’s going to, when you were last online, and more potentially revealing data points (more details here). These logs are necessary to ensure a message is correctly delivered, and they help WhatsApp analyze traffic and see potential issues. The problem lies with whom it’s shared.
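The split between protected content and visible metadata can be pictured as a message record, where only one field is ciphertext. The field names below are illustrative, not WhatsApp's actual schema:

```python
# What an end-to-end encrypted service can and cannot see, sketched as a
# message record. Field names and values are illustrative only.
message = {
    "content": "<ciphertext>",            # E2EE: unreadable to the service
    "sender": "+41790000000",             # metadata: visible to the service
    "recipient": "+41790000001",          # metadata: visible to the service
    "timestamp": "2024-03-01T12:00:00Z",  # metadata: visible to the service
    "device": "iPhone 15",                # metadata: visible to the service
}

# Everything except the content is available for analysis or sharing.
metadata = {k: v for k, v in message.items() if k != "content"}
```

Even without reading a single message, this visible layer reveals who talks to whom, when, and from what device.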
WhatsApp may have started as a small, standalone company, but in 2016 it was bought by Facebook (now Meta). As a result, WhatsApp shares your messages’ metadata with its parent company, something we cover in detail in our article on WhatsApp’s privacy policy.
The short version is that Facebook uses much of your WhatsApp activity — though not your messages themselves — to target you with ads. The company also knows who you send messages to and when, which is pretty scary considering that nobody at Facebook seems to know what they’re doing with all that data.
So, while your private photos may be safe, WhatsApp and, by extension, all Meta organizations know you sent something, when you sent it, and who you sent it to. If this doesn't sit well with you, there are ways to share private videos and photos that bypass such intrusive companies.
If you're worried about the privacy of your messages, you should probably replace WhatsApp with a privacy-friendly alternative like Signal, which uses the same end-to-end encryption but doesn't share your metadata with Big Tech. (Telegram is another popular option, though note that it only applies end-to-end encryption in its optional Secret Chats.) However, when it comes to sending private photos, there's an even better way than messaging apps.
To make sure you can send private photos and videos and keep them private, we developed Proton Drive, a secure cloud storage service that uses end-to-end encryption for everything you upload and only collects anonymized user data. This is used purely to analyze network activity and make sure you have the best experience possible.
And unlike most of our competitors, Proton doesn’t sell any data to third parties or show ads. We are entirely funded by you, our community.
This allows us to put you and your privacy first and develop features that secure your data, such as secure link sharing. If you send a private photo through Proton Drive, you control who can see it. Not only is the link encrypted, but you can also set a password, set an expiration date, or turn sharing off with a single click.
If you’d like to try out our secure photo storage and see for yourself how it compares to WhatsApp and its data collection practices, create a Proton Drive account today. Your first 5 GB is free.
With the advent of passkeys, plenty of people are predicting the end of passwords. Is the future passwordless, though? Or is there room for both types of authentication to exist side-by-side?
At Proton, we are optimistic about passkeys and have introduced support for passkeys in our password manager. However, we are not ready to predict a future without passwords, and we believe there’s room for both technologies to coexist.
In this article, we go over these questions and tell you how Proton sees its place in this evolution.
Passwordless authentication is a method to log in to your online account or app without using a password. There are a few ways to do this — like using a hardware key, or biometrics like a retina or fingerprint scan — but the easiest and most viable way for most people is to use passkeys.
The tech gets a little tricky, but the way passkeys work is that when you set one up with a service, a pair of cryptographic keys is created. The service holds the public key, and your device holds the private key. To gain access, your device proves it holds the private key by answering a challenge from the service. This all happens in the background, without you needing to do anything beyond giving permission to use the passkey.
When they’re properly implemented by the service, passkeys are great. They’re secure, easy to use, and it’s tempting to think they will replace passwords and passphrases. Much the same goes for fingerprint scans and hardware keys. They do away with a lot of the hassle associated with authentication. However, dig a little deeper and you’ll see there’s still a case to be made for doing things the old-fashioned way.
Most forms of passwordless authentication have some kind of issue stopping them from being a one-size-fits-all solution in the same way that passwords are. A good example is biometric login, which works great most of the time but fails the moment your scanner breaks. This is one reason why you always set up a password or PIN before you scan a fingerprint; the more reliable tech acts as a backup.
Much the same goes for hardware keys: They work extremely well, but the moment you lose the key, you may be permanently locked out of your accounts unless you have a recovery password in place. As a result, hardware keys are mostly used for two-factor authentication, when you need a second method on top of a password to prove your identity.
Passkeys also have some issues that prevent them from becoming the default. Here is a breakdown.
First off, as a relatively new technology, passkeys aren’t supported by all sites and apps. While implementation is accelerating, passkey fans right now will often come away disappointed when trying to use passkey authentication. This situation will change, but we predict it will take years, mainly due to the tech being tough to implement.
While most major browsers (Google Chrome, Mozilla Firefox, Microsoft Edge) support passkeys, many smaller players don't yet, or do so only in a limited fashion. If you use Opera, Brave, or something even more exotic, passkeys aren't a great option for you.
There are also issues when using passkeys between platforms. For example, if you use a passkey created on an Apple device, you have to jump through some hoops to make it work with your Google account, locking you out until you use your password to authenticate.
Since passkeys are new, that also means any tech you use them on needs to be new. For example, only iPhones running iOS 17 and Android 14 devices support passkeys, and even then there are issues. If you’re using older hardware and software, passkeys simply will not work.
As a result, as much as we like passkeys for their speed and convenience, here at Proton we don’t believe that passwordless is the only future. Instead, passwords and passkeys will coexist, with some accounts accessible with a passkey and others using a combination of passwords and 2FA.
Because of this, we’ve developed our password manager, Proton Pass, to support passkeys alongside passwords, not instead of them. This isn’t just out of pragmatism, either: As a company that puts our community first, we give you the freedom to choose how best to secure your data for your accounts.
As a company that makes its money purely from subscriptions — no shareholders, no venture capital — we must prioritize your interests. We do this by making sure not only that you're secure, but that you can choose how that looks for you. If you like the speed and convenience of passkeys, you may use them across all platforms that support them. If you prefer having 2FA for all your accounts, you can do that, too.
If you want to try a password manager that’s not just on the cutting edge but also lets you decide how close you get to the blade, Proton Pass has a free plan that lets you use almost all its features without spending a penny. What better way to get to know the not-quite passwordless future?
At Proton, we have always been highly disciplined, focusing on how to best sustain our mission over time. This job is incredibly difficult. Everything we create always takes longer and is more complex than it would be if we did it without focusing on privacy, and we generally have to do it with fewer resources. This also makes it a path that we walk alone as few other teams share our commitment to privacy and community and, therefore, understand the unique challenges we face day after day.
But we also know that making privacy the default online will take more than just us, which is why we’re always very excited to meet like-minded teams that are purpose and community-driven. In 2022, we met the team from SimpleLogin and joined forces, and today, we’re happy to announce that Standard Notes will also join us to advance our shared mission.
Standard Notes, as the name suggests, is an end-to-end encrypted note-taking application, available on mobile and desktop, that is used by over 300,000 people. Our personal notes often contain some of our most intimate and sensitive data, and protecting them with end-to-end encryption ensures that they always remain accessible only to you. This really makes Standard Notes complementary to the Proton ecosystem of services, and it is one that we have long used ourselves and are excited to introduce to the Proton community.
Both Proton and Standard Notes share a strong commitment to our communities, so Standard Notes will remain open source, freely available, and fully supported. Prices are not changing, and if you have a current subscription to Standard Notes, it will continue to be honored. Proton aspires to do the right thing and be a responsible home for open-source projects, and just as we did with SimpleLogin, we are committed to preserving what makes Standard Notes special and much loved.
In the coming months, we hope to find ways to make Standard Notes more easily accessible to the Proton community. This way, in addition to protecting your email, calendar, files, passwords, and online activity, you can also protect your notes.
Proton has long been guided by our unique values. We’ve always believed in putting people ahead of profits, from our start as a crowdfunded project created by scientists who met at CERN right up to the present day as we safeguard the privacy of over 100 million people. It’s hard enough to run a long-lasting and durable privacy company — even fewer have managed to do it without venture capital or other outside investors.
Standard Notes has been around since 2017 and has withstood the test of time. Standard Notes has also grown without venture capital funding and has demonstrated a commitment towards serving its community. This alignment in values is rare, and creates a natural fit to work together. We are proud to have the entire Standard Notes team join us on our journey, and we look forward to learning from them and growing stronger together. But most of all, we look forward to continuing to serve both the Proton and Standard Notes communities together in the years to come.
If you’re on any Apple device, you’re familiar with the iCloud Keychain, the Apple password manager. It’s a handy tool that stores passwords for you and helps you manage your logins.
For a program that stores all your most sensitive data in one place, you may have found yourself wondering whether iCloud Keychain is safe. While the software appears to be secure, there are a few issues that may lead you to find a better password manager.
The iCloud Keychain is secure from outside attack. It uses advanced encryption to keep your data secure, and Apple is open about how it encrypts your data and when (though the code itself is not open source, as we’ll explain below). As for privacy, Apple can’t see your Keychain data. Though Apple’s reputation for being a privacy-first company has taken a beating recently, the logins you store on your Apple password manager are end-to-end encrypted.
That’s not true of all your iCloud data though. For much of the info you save to iCloud, end-to-end encryption is not on by default, meaning the company can see your data. (Proton uses end-to-end encryption by default for all our services.) See our article on iCloud privacy to understand the limitations of Apple’s cloud storage.
Keychain is safe to use, but that doesn’t necessarily mean the iCloud Keychain is the right password manager for you. It has nowhere near the features you see with competitors, even free ones. Let’s go over some of its biggest issues.
The iCloud Keychain lacks the ability to freely share passwords, letting you only share them with members who are in your Family group. If you want to quickly share a password with somebody, you’d have to add them, giving them more access than you might like. A proper password manager will streamline this process and give you more control over what you share.
Another, perhaps bigger, issue is that the iCloud Keychain doesn't work very well on non-Apple devices. If you have an Android phone or a Windows laptop, you won't be able to use anything stored on your Keychain without some serious tinkering. This means you would have to use Keychain on your Apple devices and another solution on your non-Apple devices, which is a major hassle.
The iCloud Keychain is also closed source, meaning independent researchers can’t verify how it works. If there are bugs or security issues, you’re counting on Apple and only Apple to find and fix them. (Apple’s track record in this regard is not great.) An open-source password manager can be audited by anybody, and that kind of transparency breeds a lot of trust.
Finally, the iCloud Keychain only lets you store certain items, like passwords, passkeys, and credit cards. It won't let you add secure notes or custom entries. This lack of flexibility can be restrictive when you have something that needs secure storage but doesn't fit neatly into Apple's categories.
Overall, the iCloud Keychain does a decent enough job of keeping your passwords safe. But why use it when there are much better alternatives out there? We developed Proton Pass with this in mind, an open-source password manager that offers the best in security and usability.
As we mentioned before, all our apps use end-to-end encryption, including Proton Pass. This means that nobody has access to your passwords, bank cards, notes, and certain metadata at any time except you and whomever you choose to share them with. Not even we can see what you’re storing. This makes Proton Pass a lot more secure by default.
Of course, we offer more than just security: Proton Pass works on most devices, with apps for Windows, Mac (coming soon), Android, and iPhone. Switching between them requires no effort; the transition is entirely seamless, so there are none of the compatibility issues you get with the iCloud Keychain.
We also let you store miscellaneous items as secure notes, meaning all your secure items can find a home, not just what we deem you should store. Most importantly of all, our interface is laid out intuitively, meaning you can access all items and settings quickly without Apple’s many extra screens.
Best of all, Proton is not beholden to shareholders demanding profit, meaning that we don’t need to target ads at you. All our resources go into creating the very best experience for our community. If that sounds like something you’d be interested in, create a free Proton Pass account today.
We recently announced that Proton Pass now supports passkeys for everyone across all devices.
Unfortunately, this kind of universal compatibility remains a unique approach to implementing passkeys. Even though passkeys were developed by the FIDO Alliance and the World Wide Web Consortium to replace passwords and are meant to provide "faster, easier, and more secure sign-ins to websites and apps across a user's devices", their rollout hasn't lived up to these lofty ideals.
Instead, the first organizations to offer passkeys, Apple and Google, prioritized using the technology to lock people into their walled gardens rather than provide a secure solution to everyone. This closed approach diminishes the value of passkeys for everyone and makes it less likely that they’ll be universally adopted, which is critical if they’re to ever replace passwords.
At Proton, we believe online privacy and security should be accessible to everyone. If we want to achieve a better internet for all, everyone must be able to take advantage of the latest security advancements.
This article looks at passkeys’ initial promise, how Big Tech has tried to hijack them to serve their own purposes, and how we can ensure passkeys fulfill their potential for everyone.
Passkeys were developed because, as far back as 2013, companies realized they must provide users with a better solution for account security than passwords. To be effective, you must have a unique, strong password for each online account. Since most people have upwards of 100 accounts, this essentially means you must use a password manager to maintain basic account security.
Also, passwords fail to provide the security they promise. As the FIDO Alliance points out, passwords are at the root of 80% of data breaches. Attackers can convince people to share passwords with social engineering attacks, easily harvest them from data breach records, or reuse them indefinitely (or at least until the account owner makes a new password).
Passkeys were created in 2016, and they represent a major step towards reducing our reliance on passwords. Passkeys are based on WebAuthn, an open standard that security keys like Yubikey use.
The idea behind passkeys was to create a solution that removes the burden from users and mitigates some of the worst aspects of passwords. A passkey is a pair of cryptographic keys: the service stores the public key, while the private key resides on your device, where it can be discovered by apps or browsers for simple and secure logins and synced between your devices using the cloud and end-to-end encryption. The result is a phishing-resistant, nearly effortless, secure login.
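To make that keypair flow concrete, here is a deliberately toy sketch of the challenge-response at the heart of a passkey login. It uses textbook RSA with tiny primes purely for illustration; real passkeys use modern elliptic-curve signatures via WebAuthn, and the numbers below would be trivially breakable:

```python
import hashlib
import secrets

# Toy RSA keypair with tiny textbook primes -- wildly insecure, purely
# illustrative. Real passkeys use ECDSA or Ed25519 under WebAuthn.
p, q, e = 61, 53, 17
n = p * q                            # public modulus, held by the service
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, stays on your device

def sign(challenge: bytes) -> int:
    """Done locally by your device; the private key never leaves it."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(h, d, n)

def verify(challenge: bytes, signature: int) -> bool:
    """Done by the service, which only ever holds the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % n
    return pow(signature, e, n) == h

# Login: the service sends a fresh random challenge; your device signs it.
challenge = secrets.token_bytes(16)
assert verify(challenge, sign(challenge))
```

Because the service stores only the public half, a breach of its database exposes nothing an attacker can log in with, which is the property the rest of this article relies on.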
However, for passkeys to be a true account security solution, they must become universal. Like many online features, passkeys benefit from a network effect. The more sites and services that use passkeys, the better and easier a solution they are for users (with the added benefit of making everyone’s data more secure). Unfortunately, Big Tech has treated passkeys as an opportunity to advance their commercial interests rather than as a tool to provide universal security.
Apple was the first major company to roll out passkeys in 2022. In fact, it was Apple that first popularized the name "passkey".
However, Apple focused primarily on optimizing passkeys to work solely with its products rather than making them an interoperable, easy-to-use feature (as one might expect of a tool developed in collaboration with dozens of other organizations and companies). For example, if you create a passkey on your iPhone, it easily syncs to Mac devices but is incredibly difficult to use on a Windows device. In fact, if you try to use a passkey from an Apple device on an Android (for example, if you have a Mac and an Android), you must use a QR code — there is no automatic sync. This unfortunately set a precedent that every other major rollout of passkeys has followed.
In an attempt to catch up to Apple, Google announced passkey support in 2023, but its implementation is inconvenient. For example, if you use Google Chrome as your browser on a Mac, it uses the Apple Keychain feature to store your passkeys. This means you can’t sync your passkeys to your Chrome profile on other devices. Similarly, Android only recently added support for third-party passkey providers (in Android OS version 14). In addition to a poor user experience, Google passkeys are also limited by Google’s attempt to lock you into its platform. For example, if you create a passkey with Chrome on your laptop, you can’t use it in the Firefox browser on your smartphone. And if you like Chrome but want to use a third-party password manager to store your passkeys, Google forces you through a lengthy process to opt out of Google Password Manager.
And both Apple and Google prevent you from exporting your passkeys, meaning you’ll need to create them all over again if you want to switch to another password manager. They also both use closed-source passkey implementations, making it harder for independent experts to verify their security.
After seeing Big Tech’s rollout, several password managers also rushed their release of passkeys, resulting in a similarly clunky user experience. Some password managers only support passkeys via their web extension, making it difficult for anyone trying to log in to the same app with a passkey on their mobile phone. Most password managers that support passkeys only offer them with a paid plan, meaning Google Password Manager and Apple Keychain were the only viable free passkey providers until Proton Pass added them.
Account security is facing a similar inflection point as secure connections did in the early 2010s — the problem has been identified, a simple solution exists, and it’s simply a question of enforcing that solution everywhere. With HTTPS, organizations like EFF (with HTTPS Everywhere) and Let’s Encrypt (which simplified obtaining a TLS cert) led the drive in allowing people and websites to create secure, encrypted connections. Now, all major browsers enforce HTTPS connections by default, and the vast majority of websites support TLS. It has made the internet immeasurably safer.
While passkeys are certainly more technically difficult to implement correctly than HTTPS, they promise an even more sweeping effect on internet security — if we force Big Tech companies to adhere to their original, universal intent.
Passkeys could make nearly every account secure against attacks that cause such havoc today. There’s no such thing as a “weak” passkey, so attackers will no longer be able to brute force their way into accounts. And passkeys can’t suffer mass exposure like passwords because apps and websites only store the public key — the private key remains safely stored on your device. If everyone used passkeys, much of the harmful effects of data breaches would disappear.
Both Apple and Google have made it so that if you make a passkey, you need to stick within their apps and devices to use it. This severely limits their potential and sacrifices their utility just so Big Tech can add a moat to its walled garden.
We've tried to stay true to the intention behind passkeys. With Proton Pass, passkeys work across all your devices and platforms and are available to everyone, including on our free plan.
Even though it’s unlikely the internet will be passwordless anytime soon (or indeed ever), we still believe passkeys should be as easy to use as possible in as many places and for as many people as possible. If you want to use passkeys to improve your account security and speed up your logins, you can sign up for Proton Pass for free today.
And if you believe in our mission and want to help us build a better internet where privacy is the default, you can sign up for a paid plan to get access to even more premium features.
Your private videos are for your eyes only. However, not all cloud storage services are good at storing videos securely, let alone privately. In this article we explain what you can do to keep file sharing companies from having access to the videos you upload and share online.
The sad state of affairs is that most cloud storage services can see the files you upload — though whether they actually look is another matter. This comes down to the type of encryption used. Normally, when you upload a file, it's encrypted twice: first while it's being sent, using what's called in-transit encryption. Once it arrives, it's decrypted and then encrypted again with what's called at-rest encryption.
The reasons for this are historical more than anything. Before we had high-speed internet, it made sense to use two kinds of encryption. The in-transit encryption would be extra secure to make sure that even if your data was intercepted it would be unreadable, while files kept on servers would use a fast cipher so you could quickly access them.
The downside to this is that the service you’re using can see what you’re storing. In this system, the service always has the key to your data, and you have no real control over your files.
The only way around this is to use a different type of encryption, called end-to-end encryption. In this scenario, data is encrypted from the moment you upload it until you download it again. At no other point is it decrypted, meaning nobody can sneak a peek at it. Better yet, you’re the one in control of the keys as they’re generated on your end, meaning you’re in control of your files.
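The difference between the two models comes down to who generates and holds the key. The sketch below uses a toy XOR stream cipher (real services use AES or similar) just to make that ownership difference visible:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher: XOR data with a SHA-256-derived keystream.
    Illustrative only -- real services use vetted ciphers like AES."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    # XOR is its own inverse, so the same call encrypts and decrypts.
    return bytes(a ^ b for a, b in zip(data, stream))

video = b"frame data..."

# At-rest model: the *service* generates and holds the key, so it can
# decrypt your file whenever it chooses.
service_key = secrets.token_bytes(32)
stored = keystream_xor(service_key, video)

# End-to-end model: *you* generate the key on your device; the service
# only ever receives ciphertext it cannot read.
client_key = secrets.token_bytes(32)
uploaded = keystream_xor(client_key, video)
assert keystream_xor(client_key, uploaded) == video
```

Same cipher in both cases; the privacy guarantee comes entirely from which side of the connection the key lives on.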
Currently, there’s little reason not to use end-to-end encryption, unless your business model is based on surveillance. Servers and connections are fast enough to handle encryption, and encryption protocols themselves have also significantly improved — a good example being PGP encryption. The best part is that these advanced protocols even let you share private video with others while remaining encrypted.
If you want to use a cloud storage service where your private videos stay private, end-to-end encryption is the key. However, that means you need to find a service that offers this.
This is where Proton Drive comes in, a state-of-the-art cloud storage service that has end-to-end encryption by default. It’s not a paid feature, or an obscure toggle you need to click on: Anything you store with us enjoys full privacy. At no point can anybody but you, and whomever you share with, see the videos you’re storing.
As a result, any video becomes a private video when you upload it to Proton Drive. You can decide who can see it and who can't, and you can stop sharing whenever you want. It's even possible to set an expiry date on a sharing link or protect it with a password for greater control.
See our article on creating shareable links to learn exactly how to share private videos.
Putting you in control is a core part of our philosophy at Proton. We have built an interface based on your feedback, and all our features were inspired by our community. This is because we always put our users first. We can do this because, unlike most other tech companies, we’re entirely funded by you, our users. No shareholders, no venture capital; it’s just you that’s funding us.
If that sounds like something you want to be a part of, a community with its own voice and our ear, then join Proton Drive today. The first 5 GB is free forever.
Many email services, citing security reasons, require a phone number for identity verification. This creates an unfortunate paradox in which you must give up a highly sensitive piece of personal data to Big Tech.
But there are simple ways to create an email address privately that are just as secure, if not more so.
Proton Mail lets you easily create a free email account without giving away your phone number. Unlike Outlook or Gmail, we do not spy on you, target you with ads, or profit off your data. All our revenue comes from Proton customers who upgrade for more storage and premium features.
However, even our free accounts provide you with strong privacy protections not available from Big Tech. Your messages and data are protected with end-to-end and zero-access encryption, meaning no one can access your most valuable, sensitive data — not even us.
Follow the steps below to create a Proton Mail email address without using your phone number.
To sign up for Proton Mail, you must create a Proton Account. You can then use that username and password to log in to all Proton services.
To sign up for a Proton Account:
1. Go to the Proton Account signup page in a web browser on your computer.
2. In the Username field, enter the username you want to use for your free email address.
3. Choose which domain you would like to use for your address: @proton.me or @protonmail.com (@proton.me is selected by default).
4. In the Password field, enter a password at least eight characters long and re-type the password to confirm.
5. Click Create account.
6. If you’re asked to confirm you’re human, you can choose between CAPTCHA and Email.
Please note that if you enter your email address, we only save a cryptographic hash of this personal data. It’s impossible to derive your email address from that hash, and it’s not permanently associated with the account that you create.
If you choose Email, you must provide a non-Proton Mail account to receive a code. Input the code and click Verify.
7. Enter an optional display name. This is what people will see when you send them an email. Click Continue.
8. Enter an optional email address or phone number that you can use to recover your account if you ever forget or lose your password. Entering a phone number, however, is not required.
9. Click Save or Maybe later, and you’re done.
You will be redirected to your new secure email account.
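The human-verification step above notes that only a cryptographic hash of the verification email is stored. A minimal sketch of what one-way hashing means in practice (illustrative only; the normalization and hash function shown are assumptions, not Proton’s actual scheme):

```python
import hashlib

def hash_email(email: str) -> str:
    """Return a one-way SHA-256 digest of a normalized email address."""
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The digest cannot be reversed to recover the address, but the
# same input always yields the same digest, so it can still be
# checked against later without storing the address itself.
digest = hash_email("alice@example.com")
print(digest)
```

Because the hash is deterministic, the service can recognize a previously seen address without ever keeping the address in plaintext.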
When you first sign in to Proton Mail, you’ll see our Welcome message.
Click Next to customize the look of your Proton Mail inbox.
Click Next again, and you’ll get the option to sign in to Gmail to automatically forward messages to your new Proton Mail inbox.
At Proton, giving you the ability to easily protect your privacy and most sensitive, valuable information in your digital life is central to everything we do. This contrasts greatly with the practices of Big Tech companies, which commoditize your personal data to drive profit.
Our mission is to uphold your basic human right to online privacy. That includes being able to create an email address without using a phone number.
Beyond that simple option, we deploy other privacy-first features, including hide-my-email aliases, which help you keep your real email address private. A hide-my-email alias is simply another address that will automatically forward all emails sent to it to your main mailbox. You receive all messages, but your real email address and identity remain hidden.
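Conceptually, a hide-my-email alias is just a mapping from a public address to your hidden mailbox. A toy sketch of the forwarding idea (all addresses are made up; this is not Proton’s implementation):

```python
# Toy model of hide-my-email forwarding: the sender only ever
# sees the alias, and the real mailbox stays hidden.
aliases = {
    "shopping.abc123@alias.example": "realname@proton.me",  # hypothetical addresses
}

def route(to_address, message):
    """Forward mail sent to an alias on to the real mailbox."""
    real = aliases.get(to_address)
    if real is None:
        return None  # unknown alias: nothing to deliver
    return (real, message)

print(route("shopping.abc123@alias.example", "Your order shipped"))
```

Deleting the alias entry severs the link: future mail to that address simply stops arriving, while your real address was never exposed.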
Additionally, Proton Mail protects you from unwanted spam and potential phishing scams with several filters, including smart spam detection, spam lists, WKD, DANE, DMARC, SPF, and other powerful tools to ensure any messages that reach your inbox are clean.
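SPF, one of the authentication checks listed above, works by publishing a DNS TXT record that lists which servers may send mail for a domain; receiving servers parse that record before trusting a message. A minimal parser sketch (the record below is a made-up example):

```python
def parse_spf(record: str) -> dict:
    """Split an SPF TXT record into its version and mechanisms."""
    parts = record.split()
    if not parts or parts[0] != "v=spf1":
        raise ValueError("not an SPF record")
    return {"version": "spf1", "mechanisms": parts[1:]}

# Hypothetical record for illustration only.
spf = parse_spf("v=spf1 include:_spf.example.com ip4:192.0.2.0/24 ~all")
print(spf["mechanisms"])
```

A real validator would then resolve each mechanism (e.g. the `include:` and `ip4:` entries) against the connecting server’s IP; the `~all` at the end tells receivers to soft-fail anything unlisted.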
Easy Switch is another simple-to-use feature that allows you to seamlessly transition to your new Proton Mail inbox, back up your data, and import existing messages, contacts, and calendars from other email services, such as Gmail.
When you create a Proton Mail account, you are both protecting your most valuable data and helping build a better internet where privacy is the default.
DISCLAIMER:
Privacy Pro bundles three new protections from DuckDuckGo into one easy subscription. Subscribers get:
Getting these services separately from other companies could cost upwards of $30/month; our users can subscribe to Privacy Pro for $9.99/month or $99.99/year. Privacy Pro is currently only available to United States residents, but we plan on expanding to other regions in the future. Sign up at duckduckgo.com/pro and make sure you're using the most up-to-date version of the DuckDuckGo browser on all your devices.
Every day, tens of millions of people rely on DuckDuckGo to add a layer of privacy to their online activities. The centerpiece of our product offering is now the DuckDuckGo browser, which offers the most comprehensive set of free privacy protections by default. (One immediate benefit? Fewer ads and popups than you’d see on other browsers.) Our browser bundles our private search engine, tracker blocking, Email Protection, and more than a dozen other free privacy features in one convenient package. However, there’s only so much protection we can provide for free. For example, some protections, like securing our users’ network connections with a VPN, require significantly more bandwidth and other resources.
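Tracker blocking like the browser’s generally works by checking each third-party request against a blocklist of known tracker domains before the request is ever sent. A simplified matcher sketch (the domains are made up; DuckDuckGo’s real blocklist and matching rules are far more sophisticated):

```python
# Toy blocklist matcher: block a request if its host is a listed
# tracker domain or any subdomain of one.
BLOCKLIST = {"tracker.example", "ads.example"}  # hypothetical entries

def is_blocked(host: str) -> bool:
    parts = host.lower().split(".")
    # Check the host itself and every parent domain against the list.
    for i in range(len(parts) - 1):
        if ".".join(parts[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("pixel.tracker.example"))  # True: subdomain of a listed tracker
print(is_blocked("news.example"))           # False: not on the list
```

Blocking at this stage, before the request leaves the device, is also why tracker blocking saves bandwidth: the tracking script is never downloaded at all.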
Enter Privacy Pro: a three-in-one subscription service that offers even more seamless privacy protection. Privacy Pro subscribers get a fast, secure, and easy-to-use VPN that doesn’t log your activity; Personal Information Removal, which helps remove your information from “people search” data broker sites that store and sell it; and Identity Theft Restoration, which helps to fix credit report mistakes and recover any resulting financial losses. (Please note: Setting up and managing Personal Information Removal requires a Mac or Windows computer.)
On its own, the DuckDuckGo browser lets you search and browse privately. By adding Privacy Pro, you can also limit data brokers’ access to your personal information and secure your Internet connection across your whole device, which hides your location and device IP address from sites you visit — all in one place.
Adding a Privacy Pro subscription makes the DuckDuckGo browser's best-in-class protections even stronger.
At DuckDuckGo, we don’t track you; that’s our privacy policy in a nutshell, and this new subscription service is no exception. Guided by the principle of data minimization, we designed Privacy Pro to maximize your privacy:
We’re here to seamlessly protect your privacy — not compromise it.
Read the Privacy Policy and Terms of Service for Privacy Pro.
Our non-logging VPN secures your Internet connection on up to five devices at once.
Get an extra layer of online protection with the VPN made for speed, security, and simplicity — built and operated by DuckDuckGo, not an outside provider. Our VPN encrypts your Internet connection for all your browsers and apps across your entire device, hiding your location and IP address from the sites you visit. Because connections are encrypted, your Internet service provider (ISP) can’t see your online traffic either. And we have a strict no-logging policy; we don’t log or store data that can connect you to your online activity, or to any other DuckDuckGo services, such as search.
No need to install a separate VPN app. Once you sign up for Privacy Pro, you can install our VPN right in your DuckDuckGo browser. After that, you can secure your connection in just one click and check its status at a glance. It offers full-device coverage on up to five devices at once.
Our VPN is simple to use. If your VPN connection gets interrupted for any reason, it attempts to reconnect automatically and prevents data leaks until the reconnection is successful. And it works perfectly with DuckDuckGo’s other protections; if you’re an Android user, you should know our VPN is the only one compatible with App Tracking Protection.
We currently have VPN servers across the US, Europe, and Canada, and we’ll be adding more over time. To maximize speed and stability, you’ll connect to the closest available VPN server by default, but you can manually choose whichever location you prefer.
To encrypt your traffic and route it through a VPN server, we use the open-source WireGuard protocol, which is fast and secure. We also route your DNS queries automatically through the VPN connection to our own DNS resolvers, which further hides your browsing history from your ISP.
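The DNS detail above matters because even with an encrypted tunnel, queries sent to the ISP’s resolver would still reveal every site you look up. A conceptual sketch of that routing decision (a toy model with placeholder addresses, not WireGuard code):

```python
def pick_resolver(vpn_connected: bool) -> str:
    """Route DNS to the VPN provider's resolver whenever the tunnel is up.

    The addresses are illustrative placeholders, not real resolver IPs.
    """
    VPN_RESOLVER = "10.0.0.2"    # reachable only through the encrypted tunnel
    ISP_RESOLVER = "192.0.2.53"  # default resolver assigned by the ISP
    return VPN_RESOLVER if vpn_connected else ISP_RESOLVER

print(pick_resolver(True))
```

With the tunnel up, both the query and its destination resolver are inside the encrypted connection, so the ISP sees neither the lookup nor the answer.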
Learn more about the VPN on our Help Pages.
Personal Information Removal helps get your name, address, and more off of people search sites.
Ever tried looking yourself up online? Whereas our other web tracking protections help defend against trackers that gather your personal information while you browse, Personal Information Removal goes a step further: It works to actually remove personal information, such as your name and home address, from people search sites that store and sell it, helping to combat identity theft and spam.
How does it work? People search sites, like Spokeo and Verecor, are a common type of data broker. They collect your personal information from local and federal records, public forums like social media, and even other data brokers, and make it available online. (You’ve probably seen them in search results when you look up your name.) We scan dozens of these sites for your info and, if found, request its removal, even handling back-and-forth confirmation emails for you automatically behind the scenes. Unlike other similar services, we only contact the data brokers once we confirm that you’re in their databases, and the info you enter for scanning is stored on your device — not on remote servers.
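The scan-then-request flow described above can be sketched as a loop that only contacts a broker after a local match is confirmed (all names and data are hypothetical; the real service also automates opt-out forms and confirmation emails):

```python
# Toy model of the Personal Information Removal flow: profile data
# stays local, and a removal request is sent only to sites whose
# listings are confirmed to match it.
profile = {"name": "Jane Doe", "city": "Springfield"}  # stored on-device

broker_listings = {  # pretend scan results from people-search sites
    "peoplesearch-a.example": [{"name": "Jane Doe", "city": "Springfield"}],
    "peoplesearch-b.example": [{"name": "John Roe", "city": "Shelbyville"}],
}

def removal_requests(profile, listings):
    """Return only the brokers with a confirmed match to contact."""
    to_contact = []
    for site, records in listings.items():
        if any(record == profile for record in records):
            to_contact.append(site)
    return sorted(to_contact)

print(removal_requests(profile, broker_listings))
```

Keeping the profile on-device and contacting only confirmed matches mirrors the two privacy properties the article highlights: no remote storage of your details, and no needless disclosure to brokers who don’t already have them.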
To help us build Personal Information Removal from the ground up while maintaining our strict privacy standards, DuckDuckGo acquired data removal service Removaly in 2022. Removaly was a pioneer in the data removal space, developing a way to navigate data brokers’ confusing opt-out process automatically without compromising users’ privacy in the process.
Personal Information Removal re-scans sites regularly to minimize the risk of your info reappearing, using the data stored on your device. Your device also initiates any removal requests. You can keep tabs on the progress of ongoing removals — and see the personal information we’ve already removed! — on your personal dashboard within the DuckDuckGo browser. Once it’s set up, simply select Personal Information Removal from the browser’s three-dot menu in the upper right.
You'll need to set up Personal Information Removal on one primary Mac or Windows computer. Right now, the dashboard can only be accessed from that device, but we are planning to add the ability to view it from your other devices.
Learn more about Personal Information Removal on our Help Pages.
Get some peace of mind: if your identity is ever compromised, Identity Theft Restoration is standing by to help.
With more than 1 million cases reported in the U.S. each year, identity theft is more common than you might think. Personal Information Removal helps reduce the chance of identity theft, but unfortunately, nothing can prevent it entirely. So, let us give you some peace of mind: If your identity is stolen or compromised, Identity Theft Restoration will help you handle the stress and expense.
Identity Theft Restoration is brought to our users in partnership with Iris® Powered by Generali, one of the oldest identity protection firms in the U.S. Iris’ identity theft advisors are available 24/7, every day of the year, and answer calls within 11 seconds on average. This responsiveness has earned them 18 customer service awards over the last 10 years.
If your identity is stolen, Iris will collect some details about your situation in order to provide assistance; no personal information is shared between Iris and DuckDuckGo. Once a case is established, Iris has several ways to help get you back on track:
Learn more about Identity Theft Restoration in our Help Pages.
Ready to give Privacy Pro a try? Make sure you’ve got the latest version of the DuckDuckGo browser (iOS / Android / macOS / Windows) and head to duckduckgo.com/pro.
Privacy Pro is available for $9.99/month or $99.99/year; your subscription will auto-renew monthly or annually, depending on the payment terms selected, until canceled. If you subscribed via the Apple App Store or Google Play Store, you can manage your subscription and payment methods there. If you subscribed via our website, you’ll manage your account from the DuckDuckGo browser’s Settings instead.
Have you been waiting to try the DuckDuckGo browser? Maybe you’re using our browser on your phone but haven’t tried the Windows or Mac version? Now is the perfect time to make DuckDuckGo the default browser on all your devices, thanks to our latest improvement: Sync & Backup. You could already import bookmarks and passwords from other browsers into DuckDuckGo, but now you can privately sync those bookmarks and passwords between DuckDuckGo browsers on multiple devices.
When you use Chrome, there’s a good chance you’re signed in with your Google account – they’re constantly pressuring you to do so! There’s convenience in that: all your bookmarks, passwords, and favorites follow you wherever you browse, whether you’re using your computer, phone, or tablet. But there’s a problem. This also gives Google implicit permission to collect even more data about your browsing activity than it would otherwise have, and to use it for targeted advertising that can follow you around.
At DuckDuckGo, we don’t track you; that’s our privacy policy in a nutshell. We’ve developed our privacy-respecting import and sync functions without requiring a DuckDuckGo account – and without compromising your personal data.
Our built-in password manager stores and encrypts your passwords locally on your device. Our private sync is end-to-end encrypted. (When you use private sync, your data stays securely encrypted throughout the syncing process, because the unique key needed to decrypt it is stored only on your devices.) Your passwords are completely inaccessible to anyone but you. That includes us: DuckDuckGo cannot access your data at any time.
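The reason end-to-end encrypted sync keeps your data private is that only ciphertext ever leaves the device, while the key stays local. A toy illustration of that property (the XOR-with-a-hash keystream below is NOT a real cipher and is not DuckDuckGo’s actual scheme; it only shows that a server holding ciphertext without the device-local key learns nothing useful):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudorandom byte stream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

device_key = secrets.token_bytes(32)        # generated and kept on-device
bookmark = b"https://duckduckgo.com"
ciphertext = encrypt(device_key, bookmark)  # this is all a sync server would see
assert encrypt(device_key, ciphertext) == bookmark  # the same local key decrypts
```

The ciphertext is meaningless without `device_key`, and that key never travels through the sync channel – which is exactly why losing all your devices (and the key with them) requires the Recovery Code described below.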
The first step is to download our free browser on one or more devices. (The feature works across most Windows, Mac, Android, and iPhone devices – if you’ve got our browser, you can use Sync & Backup!) If you’re already using the browser, check that it’s up to date. Next, head to the browser’s Settings, choose Sync & Backup > Sync With Another Device and follow the instructions under Begin Syncing.
If you’re on a mobile phone or tablet, you can link devices with a QR code; on desktop computers, you’ll manually enter an alphanumeric code.
Sync passwords and bookmarks between devices by scanning a QR code or manually entering a unique alphanumeric code – no signing in necessary.
Only working with one device? Choose Sync and Back Up This Device from the “Single-Device Setup” section. Once your sync is complete, you can see a list of all your synced devices, edit device nicknames, and fine-tune your settings.
See a list of your synced devices – and add new ones! – under your browser’s Settings > Sync & Back Up.
Once you’re set up, you’ll want to save your Recovery PDF in a secure place. This document contains your Recovery Code, a unique code that will let you access your synced data if your devices are lost or damaged. This is especially important because of our secure end-to-end encryption; your Recovery Code contains the unique, locally generated encryption key that keeps your data private from everyone – including us! If you lose your devices, your Recovery Code is the only way to access your data from a new phone or computer.
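A Recovery Code can work this way because the encryption key is generated locally and can be serialized into a human-transcribable string. A sketch of the idea (the base32 encoding is an assumption for illustration, not DuckDuckGo’s actual format):

```python
import base64
import secrets

def make_recovery_code(key: bytes) -> str:
    """Encode a device-local key as an alphanumeric recovery code."""
    return base64.b32encode(key).decode("ascii").rstrip("=")

def restore_key(code: str) -> bytes:
    """Rebuild the key from a recovery code (re-adding base32 padding)."""
    padded = code + "=" * (-len(code) % 8)
    return base64.b32decode(padded)

key = secrets.token_bytes(32)  # generated locally, never sent to a server
code = make_recovery_code(key)
assert restore_key(code) == key  # the code alone is enough to recover the key
```

Because the code fully encodes the key, anyone holding it can decrypt your synced data – which is why the article stresses storing the Recovery PDF somewhere secure.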
With your Recovery Code, you can restore bookmarks, favorites, and other DuckDuckGo settings on a replacement device if yours is lost or damaged.
The DuckDuckGo browser comes with the features you expect from a go-to browser – it even banishes ads that run on creepy trackers, without the need for an outside ad blocker, and handles cookie pop-ups for you where it can. Plus, it offers over a dozen powerful privacy protections that most popular browsers don’t enable by default. This uniquely comprehensive set of privacy protections helps protect your online activities, from searching to browsing, emailing, and more.
Our privacy protections work without you having to know anything about the technical details or deal with complicated settings. Just switch your browser to DuckDuckGo across all your devices, and you’ll get privacy by default.
For more detailed instructions on how to use the new sync function – or to peek under the hood of any of DuckDuckGo’s privacy protections! – you can find more information on our Help Pages.
2023 marks DuckDuckGo's thirteenth year of donations—our annual program to support organizations that share our vision of raising the standard of trust online. This year, we're proud to donate to a diverse selection of organizations across the globe that strive for better privacy, digital rights, greater competition in online markets, and access to information free from algorithmic bias.
This year, we’re donating $1,100,000, bringing the total donations since 2011 to $5,850,000. Everyone using the Internet deserves simple and accessible online protection; these organizations are all pushing to make that a reality. We encourage you to check out their valuable work below, alongside details about how our funds were allocated this year.
“EFF’s mission is to ensure that technology supports freedom, justice, and innovation for all people of the world. EFF has been defending civil liberties in the digital world for over thirty years.”
"The Markup challenges technology to serve the public good by producing investigative journalism, unique tools, and accessible resources to inspire action and agency."
"Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest."
"Established in 1987, ARTICLE 19 is an international non-profit organization that defends freedom of expression, fights against censorship, protects dissenting voices, and advocates against laws and practices that silence individuals, both online and offline."
“The Common Crawl Foundation was founded with the goal of democratizing access to web information by producing and maintaining an open repository of web crawl data that is universally accessible and analyzable. Our vision is of a truly open web that allows open access to information and enables greater innovation in research, business, and education. We level the playing field by making wholesale extraction, transformation, and analysis of web data cheap and easy.”
"European Digital Rights (EDRi) is the biggest European network defending rights and freedoms online - currently 50+ NGOs are members of EDRi and dozens of observers closely contribute to its work. In 2023, EDRi celebrates its 20th anniversary of existence - 20 years of impact and efforts to build a people-centered, democratic, digital society."
“Founded in 2011, Fight has organized some of the largest and most effective online campaigns in history, with a focus on ensuring that marginalized communities have equitable access to the Internet and technology that is free of surveillance, abuse of personal data, and censorship.”
“Signal Technology Foundation protects free expression and enables secure global communication through open source privacy technology.”
“The Surveillance Technology Oversight Project (S.T.O.P.) advocates and litigates for privacy, working to abolish local governments’ systems of discriminatory mass surveillance."
"Through engaging with lawmakers, exposing false narratives and bad actors, and pushing for landmark legislation, The Tech Oversight Project seeks to hold tech giants accountable for their anti-competitive, corrupting, and corrosive influence on our society and the levers of power."
“As a grassroots-to-global organization, Access Now defends and extends the digital rights of people and communities at risk by fighting for human rights in the digital age through direct technical support, strategic advocacy, grassroots grantmaking, and convenings such as RightsCon.”
“AJL’s harms reporting platform aims to capture people's lived experiences with AI harms, connect them with resources, and identify areas where there are no or few resources.”
“Bits of Freedom shapes tech policy in order to facilitate an open and just society, in which people can hold power accountable and effectively question the status quo.”
"The Competition Law Forum is a centre of excellence for European competition and antitrust policy and law at the British Institute of International and Comparative Law (BIICL)."
“UCLA Center for Critical Internet Inquiry (C2i2), housed in the UCLA Division of Social Sciences, is a critical internet studies community committed to reimagining technology, championing social justice, and strengthening human rights through research, culture, and public policy.”
“Creative Commons (CC) is an international nonprofit organization dedicated to building and sustaining a thriving commons of shared knowledge and culture that serves the public interest.”
"Digital Rights Watch is Australia's leading digital rights organisation. They defend and promote privacy, democracy, fairness and fundamental rights in the digital age."
“The Society for Civil Rights e.V. (Gesellschaft für Freiheitsrechte e.V. or "GFF") is a donor-funded organization from Germany that defends fundamental and human rights by legal means. The organization promotes democracy and civil society, protects against disproportionate surveillance and advocates for equal rights and social participation for everyone.”
"OpenMedia is a community-driven organization that works to keep the Internet open, affordable, and surveillance-free. We operate as a civic engagement platform to educate, engage, and empower Internet users to advance digital rights around the world."
“Open Rights Group (ORG) is the UK’s largest grassroots digital rights campaigning organisation, working to protect everyone’s rights to privacy and free speech online.”
“Open Source Technology Improvement Fund directly helps critical open source projects with their security needs and is extremely grateful for the continued support from DuckDuckGo. This funding is pivotal to ongoing operations and growth, as it is one of our only donation sources that is not tied to any deliverable or project. Over the past year, we have been able to sustainably help critical open source projects improve their security posture, and in the process have found and fixed over 100 significant bugs and vulnerabilities.”
“Privacy Rights Clearinghouse focuses on increasing access to information, policy discussions, and meaningful rights so that data privacy can be a reality for everyone.”
“Restore the Fourth opposes mass government surveillance, and organizes locally and nationally to defend privacy and the Fourth Amendment.”
“Tactical Tech is an international NGO that, for over 20 years, has engaged with citizens and civil society organisations to explore and mitigate the impacts of technology on society.”
“At the Tor Project, we believe everyone should be able to explore the internet with privacy. We advance human rights and defend your privacy online through free, open source software and the decentralized Tor network.”
Founder and CEO Gabriel Weinberg celebrates DuckDuckGo's past, present, and future:
Fifteen years ago, I launched DuckDuckGo from my basement in Valley Forge, Pennsylvania, hoping to offer a user-centric alternative to Google. This was 2008 – years before Snowden, a decade before Cambridge Analytica, and more broadly before the world had started to realize the scary power and creepy surveillance of companies like Google and Facebook.
Growth was very slow at first. It was just me behind the scenes for quite a while, putting together the search engine and asking people for feedback. I realized DuckDuckGo was resonating with people when things really started to pick up in 2011, so I started building out the team (many of whom are still at the company today) and we established our company vision to raise the standard of trust online.
Today, that vision remains the same. Fifteen years later we've built something truly rare in tech: a healthy, profitable company that protects user privacy, instead of exploiting it.
People care about their online privacy. That's what fuels our growth. According to a recent Forrester study, 87% of US online adults “use at least one privacy- or security-protecting tool online.”
While our product started as a search engine, today it’s a free mobile and desktop browser with our private search engine built in, along with more than a dozen other tracking protections, many of which are unique to DuckDuckGo (if you want to know more about them, I’ve added a list below). This is combined with the simple promise laid out in our Privacy Policy: we don’t track you.
We design our product so that this uniquely comprehensive and overlapping set of privacy protections is seamless to users: it just works without having to know anything about the technical details or deal with complicated settings. All you have to do is switch your browser to DuckDuckGo across all your devices and you get privacy by default.
I’ve always believed that the easier we can make getting online privacy, the more people will switch to DuckDuckGo. That’s why our browser and browser extensions have been downloaded more than 250 million times. This has propelled our search engine to hold the #2 position in mobile market share and #3 overall in the U.S. and over 20 other major markets including the UK, Germany, France, India, Australia, and Canada. Over the past three years alone, people have made more than 100 billion private searches on DuckDuckGo.
I want to thank everyone who has used and supported DuckDuckGo, and who continues to do so. We appreciate you!
And, to those who aren't users, we'd love for you to give us a try, or another try if it’s been a while since the last time. We’ve been continually improving our core search, browse, and email experiences. Looking forward, you’ll see DuckDuckGo introduce new product experiences that similarly work together to help you protect even more of what you do online.
I continue to believe there is huge, pent-up demand for privacy-respecting alternatives to Google if it were easier to switch search and browser defaults across devices. That is, I believe we’d be much bigger, perhaps as much as ten times bigger, if it weren’t for Google’s anticompetitive tactics.
In any case, with ever-increasing exploitation of personal data by Google, Facebook, and others, we believe our work is as important as ever. That’s why we’ll remain laser-focused on our product vision of being the “easy button” for privacy.
Now that you know more about what we do and why we do it, I thought I’d also share some things you might not know from our past 15 years:
Your privacy is constantly under threat by companies using your personal data, leaking it, or even selling it to others and then using it to try to manipulate you with creepy ads, discriminate against you, and more. To help prevent this from happening, DuckDuckGo browsers offer the most comprehensive privacy protection by default without breaking your online experience. Because trackers are always working to get around privacy protections, we’ve layered on many types of unique and innovative protections by default that don’t exist in most browsers or browser extensions. We’re continually working to improve these protections while also introducing new protections to address emerging threats.
For those interested, here’s some more info on our various privacy protections:
You get all of this with one download, and more is coming – stay tuned!
DuckDuckGo for Windows is available now at duckduckgo.com/windows! Making the switch is easy; new users can import bookmarks and passwords from other browsers and password managers.
Banish cookie consent pop-ups with Cookie Pop-up Management.
Windows users, this one’s for you! Starting today, our desktop browser for Windows is officially in public beta – no invite codes, no waiting list, just a fast, lightweight browser that makes the Internet less creepy and less cluttered. DuckDuckGo for Windows is already equipped with nearly all the privacy protections and everyday features that users know and trust from our iOS, Mac, and Android browsers – and it’s getting closer to parity with those browsers every day. (More info in the “What’s Next” section below.)
DuckDuckGo for Windows comes with these best-in-class privacy protections switched on by default, leading to a better everyday user experience. By blocking trackers before they load, for example, our desktop browsers use about 60% less data than Chrome. Switching is easy, too; you can import passwords and bookmarks from another browser or password manager in just a few clicks.
Relative to Mac users, Windows users work across a wider variety of hardware and software configurations. During our brief closed beta period, we’ve been gathering testers’ feedback and making improvements to meet as many of those needs as possible, but we haven’t tested every configuration yet, so if you do see any issues, please send feedback!
The browser doesn’t have extension support yet, but we plan to add it in the future. In the meantime, we’ve built the browser to include features that meet the same needs as the most popular extensions: ad-blocking and secure password management.
“This is fast and smooth for performance. It appears to be light on resources—well done!”
“For a beta version, I am extremely impressed thus far with everything about the Windows app. I often forget it is a beta at times, given how well it performs and how protected I feel.”
“I love the cookie manager. It is a wow moment. Keep up the good work, buddies!”
“Wow, this is incredible! Very, very smooth. Excellent browsing experience.”
“Want to know the best feature in DuckDuckGo browsers? It is Duck Player. Install the browser and open a YouTube video. No ads...it plays the video directly. Bye bye, YouTube ads.”
DuckDuckGo for Windows was built with your privacy, security, and ease of use in mind. It’s not a “fork” of any other browser code; all the code, from tab and bookmark management to our new tab page to our password manager, is written by our own engineers. For web page rendering, the browser uses the underlying operating system rendering API. (In this case, it's a Windows WebView2 call that utilizes the Blink rendering engine underneath.)
Our default privacy protections are stronger than what Chrome and most other browsers offer, and our engineers have spent lots of time addressing any privacy issues specific to WebView2, such as ensuring that crash reports are not sent to Microsoft. (For a more private Windows experience overall, we recommend that you disable optional diagnostic data in Windows under Settings > Privacy & security > Diagnostics & feedback > Send optional diagnostic data.)
DuckDuckGo for Windows has come a long way in this short time, and it will only keep improving from here. We’re hard at work right now on achieving full parity with the Mac browser, including improvements like faster startup performance, the ability to pin tabs, HTML bookmark import, more options for the Fire Button, and additional privacy features like Fingerprinting Protection, Link Tracking Protection, and Referrer Tracking Protection. As mentioned above, private password and bookmark syncing is also coming soon.
In the meantime, please keep the feedback coming; it helps a lot! There’s an anonymous feedback form in the app's three-dot menu, right under the Fire Button. DuckDuckGo believes in open sourcing our apps and extensions whenever possible; we ultimately plan to do so for DuckDuckGo for Windows, too.
Visit duckduckgo.com/windows to get the browser today, and stay tuned for more!
Generative artificial intelligence is hitting the world of search and browsing in a big way. At DuckDuckGo, we’ve been trying to understand the difference between what it could do well in the future and what it can do well right now. But no matter how we decide to use this new technology, we want it to add clear value to our private search and browsing experience.
DuckAssist is a new beta Instant Answer in our search results. If you enter a question that can be answered by Wikipedia into our search box, DuckAssist may appear and use AI natural language technology to anonymously generate a brief, sourced summary of what it finds in Wikipedia — right above our regular private search results. It’s completely free and private itself, with no sign-up required, and it’s available right now.
This is the first in a series of generative AI-assisted features we hope to roll out in the coming months. We wanted DuckAssist to be the first because we think it can immediately help users find answers to what they are looking for faster.
DuckAssist is available to try right now wherever you use DuckDuckGo.
DuckAssist is a new type of Instant Answer in our search results, just like News, Maps, Weather, and many others we already have. We designed DuckAssist to be fully integrated into DuckDuckGo Private Search, mirroring the look and feel of our traditional search results, so while the AI-generated content is new, we hope using DuckAssist feels second nature.
DuckAssist answers questions by scanning a specific set of sources — for now that’s usually Wikipedia, and occasionally related sites like Britannica — using DuckDuckGo’s active indexing. Because we’re using natural language technology from OpenAI and Anthropic to summarize what we find in Wikipedia, these answers should be more directly responsive to your actual question than traditional search results or other Instant Answers.
For now, DuckAssist is most likely to appear in our search results when users search for questions that have straightforward answers in Wikipedia. Think questions like “what is a search engine index?” rather than more subjective questions like “what is the best search engine?”. We are using the most recent full Wikipedia download available, which is at most a few weeks old. This means DuckAssist will not appear for questions more recent than that, at least for the time being. For those questions, our existing search results page does a better job of surfacing helpful information.
As a result, you shouldn’t expect to see DuckAssist on many of your searches yet. But the combination of generative AI and Wikipedia in DuckAssist means we can vastly increase the number of Instant Answers we can provide, and when it does pop up, it will likely help you find the information you want faster than ever.
DuckAssist joins many other Instant Answers on DuckDuckGo’s private search results
Generative AI technology is designed to generate text in response to any prompt, regardless of whether it “knows” the answer or not. However, by asking DuckAssist to only summarize information from Wikipedia and related sources, the probability that it will “hallucinate” — that is, just make something up — is greatly diminished. In all cases though, a source link, usually a Wikipedia article, will be linked below the summary, often pointing you to a specific section within that article so you can learn more.
Nonetheless, DuckAssist won’t generate accurate answers all of the time. We fully expect it to make mistakes. Because there’s a limit to the amount of information the feature can summarize, we use the specific sentences in Wikipedia we think are the most relevant; inaccuracies can happen if our relevancy function is off, unintentionally omitting key sentences, or if there’s an underlying error in the source material itself. DuckAssist may also make mistakes when answering especially complex questions, simply because it would be difficult for any tool to summarize answers in those instances. That’s why it’s so important for our users to share feedback during this beta phase: there’s an anonymous feedback link next to all DuckAssist answers where you can let us know about any problems, so we can identify where things aren’t working well and take quick steps to make improvements.
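The relevancy step described above can be pictured as a simple scoring pass over candidate sentences. The sketch below is a hypothetical illustration, not DuckDuckGo's actual implementation: it scores each Wikipedia sentence by keyword overlap with the query and keeps the top matches to hand to a summarization model.

```python
# Hypothetical sketch of a sentence-relevancy pass (not DuckDuckGo's actual code).
# Scores candidate sentences by keyword overlap with the query and returns the
# top-k sentences that would be passed to a summarization model.

import re

def tokenize(text):
    """Lowercase the text and split it into a set of word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def top_sentences(query, sentences, k=2):
    """Return the k sentences sharing the most keywords with the query."""
    query_terms = tokenize(query)
    scored = [(len(query_terms & tokenize(s)), i, s) for i, s in enumerate(sentences)]
    # Sort by overlap (descending), breaking ties by original article order.
    scored.sort(key=lambda t: (-t[0], t[1]))
    return [s for score, _, s in scored[:k] if score > 0]

article = [
    "A search engine index collects and stores data for fast retrieval.",
    "The weather in Paris is mild in spring.",
    "Indexing allows a search engine to answer queries quickly.",
]
print(top_sentences("what is a search engine index?", article))
```

If the scoring function picks the wrong sentences here, the summary downstream inherits the omission, which is the failure mode the paragraph above describes.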
DuckAssist is anonymous, with no login required. It’s a fully integrated part of DuckDuckGo Private Search, which is also free and anonymous. We don’t save or share your search or browsing history when you search on DuckDuckGo or use our browsing apps or browser extensions, and searches with DuckAssist are no exception. We also keep your search and browsing history anonymous to our search content partners — in this case, OpenAI and Anthropic, used for summarizing the Wikipedia sentences we identify. As with all other third parties we work with, we do not share any personally identifiable information like your IP address. Additionally, our anonymous queries will not be used to train their AI models. And anything you share via the anonymous feedback link goes to us and us alone.
If DuckAssist has already answered a question on the same topic, its response will appear automatically
We’ve used Wikipedia for many years as the primary source for our “knowledge graph” Instant Answers, and, while we know it isn’t perfect, Wikipedia is relatively reliable across a wide variety of subjects. Because it’s a public resource with a transparent editorial process that cites all the sources used in an article, you can easily trace exactly where its information is coming from. Finally, since Wikipedia is always being updated, DuckAssist answers can reflect recent understanding of a given topic: right now our DuckAssist Wikipedia index is at most a few weeks old, and we have plans to make it even more recent. We also have plans to add more sources soon; you may already see some signs of that in your results!
• Phrasing your search query as a question makes DuckAssist more likely to appear in search results.
• If you’re fairly confident that Wikipedia has the answer to your query, adding the word “wiki” to your search also makes DuckAssist more likely to appear in search results.
• If you don’t want DuckAssist to appear in search results, you can disable DuckAssist in search settings.
• If DuckAssist has generated an answer for a given topic before, the answer will appear automatically. Otherwise, you can click the ‘Generate’ button to have an answer generated for you in real time.
2022 marks DuckDuckGo's twelfth year of donations—our annual program to support organizations that share our vision of raising the standard of trust online. This year, we're proud to donate to a diverse selection of organizations across the globe that strive for better privacy, digital rights, greater competition in online markets, and access to information free from algorithmic bias.
This year, we've been able to increase our donation amount to $1,100,000, bringing the total over the past decade to $4,750,000. Everyone using the Internet deserves simple and accessible online protection; these organizations are all pushing to make that a reality. We encourage you to check out their valuable work below, alongside details about how our funds were allocated this year.
$125,000 to the Electronic Frontier Foundation (EFF)
"EFF is an essential champion of user privacy, free expression, and innovation through impact litigation, policy analysis, grassroots activism, and technology development – and has been since our founding in 1990."
$125,000 to Fight for the Future
"Fight for the Future harnesses the power of the Internet to channel outrage into action, defending our most basic rights in the digital age. They fight to ensure that technology is a force for empowerment, free expression, and liberation rather than tyranny, corruption, and structural inequality."
$125,000 to The Markup
"The Markup is a nonprofit newsroom that investigates how powerful institutions are using technology to change our society."
$125,000 to Public Knowledge
"Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest."
$125,000 to Signal
"Signal Technology Foundation develops open source privacy technology that protects free expression and enables secure global communication."
$25,000 to Access Now
"Access Now defends and extends the digital rights of people and communities at risk by combining direct technical support, strategic advocacy, grassroots grantmaking, and convenings such as RightsCon."
$25,000 to Algorithmic Justice League
"AJL's current mission is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms."
$25,000 to ARTICLE 19
"Established in 1987, ARTICLE 19 is an international think-do organization that defends freedom of expression, fights against censorship, protects dissenting voices, and advocates against laws and practices that silence individuals, both online and offline."
$25,000 to the Australia Institute's Centre for Responsible Technology
"The Australia Institute’s Centre for Responsible Technology develops public policy and research that advocate for a fairer and healthier online experience and gives back agency to individuals in our networked world."
$25,000 to Bits of Freedom
"Bits of Freedom shapes internet policy in the Netherlands and Brussels through advocacy, campaigning and litigation, because we believe in an open and just society, in which people can hold power accountable and effectively question the status quo."
$25,000 to the British Institute of International and Comparative Law
"The Competition Law Forum is a centre of excellence for European competition and antitrust policy and law at the British Institute of International and Comparative Law (BIICL)."
$25,000 to the Center for Critical Internet Inquiry
“C2i2 is a critical internet studies research center and community, committed to social justice, policy and human rights.”
$25,000 to the Detroit Community Technology Project (DCTP)
"Detroit Community Technology Project builds healthy digital ecosystems by training Digital Stewards and supporting the development of community governed internet networks."
$25,000 to European Digital Rights (EDRi)
"The EDRi network is a dynamic and resilient collective of NGOs, experts, advocates and academics working to defend and advance digital rights across the continent – for almost two decades, it has served as the backbone of the digital rights movement in Europe."
$25,000 to Freiheitsrechte (GFF)
"The GFF (Gesellschaft für Freiheitsrechte / Society for Civil Rights) is a Berlin-based non-profit NGO founded in 2015. Its mission is to establish a sustainable structure for successful strategic litigation in the area of human and civil rights in Germany and Europe."
$25,000 to the Internet Economy Foundation (IE.F)
"The IE.F is an independent think-tank based in Berlin that is dedicated to ensuring fair competition in the Internet economy and fostering a vibrant European digital ecosystem."
$25,000 to OpenMedia
"OpenMedia works to keep the Internet open, affordable, and surveillance-free. We create community-driven campaigns to engage, educate, and empower people to safeguard the Internet."
$25,000 to the Open Rights Group
"Open Rights Group (ORG) is a UK-based digital campaigning organisation working to protect our rights to privacy and free speech online."
$25,000 to the Open Source Technology Improvement Fund (OSTIF)
"OSTIF, or The Open Source Technology Improvement Fund, is a corporate non-profit dedicated to improving the security of critical open-source projects. This is done mainly by facilitating and managing security reviews and associated work for projects and organizations. In the last year, OSTIF was responsible for the identifying and fixing of more than 50 critical and high severity vulnerabilities and 250 more bug fixes in widely adopted projects."
$25,000 to Privacy Rights Clearinghouse
"Privacy Rights Clearinghouse works to make data privacy more accessible to all by empowering people and advocating for positive change."
$25,000 to Restore the Fourth
"Restore the Fourth is a grassroots, volunteer-run, nonpartisan civil liberties group that opposes mass government surveillance, protects privacy, and promotes the Fourth Amendment."
$25,000 to the Surveillance Technology Oversight Project (STOP)
"The Surveillance Technology Oversight Project (S.T.O.P.) advocates and litigates for privacy, working to abolish local governments’ systems of discriminatory mass surveillance."
$25,000 to the Technology Oversight Project
"Through engaging with lawmakers, exposing false narratives and bad actors, and pushing for landmark legislation, The Tech Oversight Project seeks to hold tech giants accountable for their anti-competitive, corrupting, and corrosive influence on our society and the levers of power."
$25,000 to the Tor Project
"At the Tor Project, we believe everyone should be able to explore the internet with privacy. We advance human rights and defend your privacy online through free, open source software and the decentralized Tor network."
Update: As of December 14, 2023, App Tracking Protection for Android is out of beta.
App Tracking Protection for Android is launching into open beta today. It's a free feature in the DuckDuckGo Android app that helps block 3rd-party trackers in the apps on your phone (like Google snooping in your weather app) – meaning more comprehensive privacy and less creepy targeting.
With the App Tracking Protection 'Activity Report', you can see which 3rd-party companies are trying to track you.
You may have heard of Apple’s App Tracking Transparency (ATT), a feature for iPhones and iPads that asks users whether they want to allow third-party app tracking or not in each of their apps (with the majority of people choosing “not”). But most smartphone users worldwide actually use Android. So, we’re offering Android users something even more powerful: enable our App Tracking Protection and we'll automatically block all the hidden trackers we can identify as blockable across your apps.
App Tracking Protection beta users have been surprised to see how many tracking attempts the feature is blocking.
The average Android user has 35 apps on their phone. Through our testing, we’ve found that a phone with 35 apps can experience between 1,000-2,000 tracking attempts every day and contact 70+ different tracking companies.
Imagine you’re spending a lazy Sunday afternoon playing around with apps on your phone; keeping an eye on flight prices for a getaway (Southwest Airlines app), checking out a house your friend has been raving about (Zillow app), seeing if those concert tickets have gone on sale yet (SeatGeek app), and checking the weather (Weather Network app).
Within these four apps alone, 45+ tracking companies are known to collect personal data like your precise location, email address, phone number, time zone, and a fingerprint of your device (like screen resolution, device make and model, language, local internet provider, etc.) that can be used to identify you. With App Tracking Protection, you can now see exactly what the trackers are typically trying to collect, which we're helping block from happening.
In the Android app, when you use App Tracking Protection, you can see the personal data we're blocking 3rd-party trackers from getting.
But what are they doing with all that information? Personal data companies like Facebook and Google use that information to build a profile that advertisers and content-targeting companies use to influence what you see online.
You could get ads about your mom’s toothpaste brand after spending time at her house (no, not a coincidence – check out this thread), be bombarded with pregnancy-related ads and content after pregnancy loss or see drug-related ads or articles about diseases you learned about on WebMD. The examples are endless. It can feel like you're being listened to, but in reality it’s not that someone is listening to your conversations, it's that your activity is being relentlessly tracked and analyzed!
The problems with all this information collection go way beyond so-called “relevant” (aka creepy) advertising and targeting. Tracking networks can sell your data to other companies like data brokers, advertisers, and governments, resulting in more substantial harms like ideological manipulation, discrimination, personal price manipulation, polarization, and more.
DuckDuckGo for Android, our all-in-one privacy solution, can help. Our app was already protecting you across search, browsing, and email. Now, with App Tracking Protection, you’re getting a lot of protection from 3rd-party app trackers, too.
When App Tracking Protection is enabled, it will detect when other apps on your phone are about to send data to any of the 3rd-party tracking companies in our app tracker dataset, and block most of those requests. And that’s it! You can continue to use your apps as usual, and App Tracking Protection works in the background to block trackers whenever it finds them, even while you sleep.
The DuckDuckGo app on Android also offers a real-time view of App Tracking Protection’s results, including which tracking network is associated with each app and what data they're known to collect. If you have notifications on, you’ll also get automatic summaries if you want them.
To keep you up-to-date, we send automatic summaries about the app tracker blocking happening behind the scenes.
App Tracking Protection uses a local “VPN connection,” which means that it works its magic right on your smartphone and without sending app data to DuckDuckGo or other remote servers. That is, App Tracking Protection does not route your app data through external companies (including ours).
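The blocking decision described above comes down to matching a request's destination host against a tracker dataset. Here's a minimal, hypothetical sketch (the domain names and matching rule are illustrative assumptions, not DuckDuckGo's actual dataset or logic):

```python
# Illustrative sketch of on-device tracker blocking. The domains below are
# made-up stand-ins, not DuckDuckGo's real app tracker dataset.
# An outbound request is blocked when its host matches a known tracker
# domain or any subdomain of one.

TRACKER_DOMAINS = {"tracker.example", "ads.example"}  # hypothetical dataset

def should_block(host):
    """Return True if host is a tracker domain or a subdomain of one."""
    labels = host.lower().split(".")
    # Check the host itself and every parent domain against the dataset.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in TRACKER_DOMAINS:
            return True
    return False

print(should_block("metrics.tracker.example"))  # subdomain of a tracker
print(should_block("weather.example"))          # not in the dataset
```

Because the check is a local set lookup, it can run on the device itself, consistent with the point above that no app data needs to leave your phone.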
As we work through the beta phase, there are a small number of apps being excluded because they rely on tracking to work properly, like browsers and apps with in-app browsers. Throughout the waitlist period, we've reduced this number by half and also dropped the exclusion for games. We look forward to reducing this list even more.
To send us general feedback or report issues with the DuckDuckGo app: open Settings > Share Feedback (in the Other section). If you run into issues with another app on your smartphone as a result of App Tracking Protection, you can disable protection for just that app under "Having Problems With An App". You'll then be asked to give details of the problem you experienced. Your feedback greatly helps our team continue improving App Tracking Protection and we appreciate it!
To get access to the beta of App Tracking Protection, find it in your settings.
Signing up is easy! Here are the four simple steps to automatic app tracker blocking.
Forget going “incognito” with other browsers that don’t actually deliver substantive web tracking protection; you deserve privacy all the time, with built-in protections that make the Internet less creepy and less cluttered. Equipped with new and improved features for everyday use, DuckDuckGo for Mac is here to clean up the web as you browse. (And yes, you can import all your passwords and bookmarks from other browsers and password managers – so switching is quick and easy!)
The privacy protections built into DuckDuckGo for Mac add up to a better user experience; by blocking trackers before they load, for example, DuckDuckGo for Mac uses about 60% less data than Chrome. The desktop app includes the built-in privacy protections you know and trust from our mobile apps – which now see over 50M downloads a year – including multiple layers of defense against third-party trackers, secure link upgrading with Smarter Encryption, and our Fire Button to instantly clear recent browsing data. An all-in-one app that aims to be the “easy button” for privacy, DuckDuckGo for Mac has no fiddly privacy settings to adjust – our foundational protections are on by default, so you can get back to browsing.
Since announcing the waitlist beta in April, we’ve been listening to beta testers’ feedback and making even more improvements to meet your needs. We added a bookmarks bar, pinned tabs, and a way to view your locally stored browsing history. Our Cookie Consent Pop-Up Manager can now handle cookie pop-ups on significantly more sites, automatically choosing the most private option and sparing you from annoying interruptions.
Keep pop-ups at bay with our automatic cookie consent manager
The app also lets you activate DuckDuckGo Email Protection on desktop, protecting your inbox with email tracker blocking and private @duck.com addresses. While we work on browser extension support that meets our high standards of privacy and quality, we’re building in more features that meet the same needs as the most popular extensions: ad-blocking and secure password management. These new features will become available across our other platforms in the near future.
Cleaning up YouTube with Duck Player – fewer creepy ads, fewer distractions: Want a more-private way to watch YouTube videos in peace? Duck Player protects you from targeted ads and cookies with a distraction-free interface that incorporates YouTube’s strictest privacy settings for embedded video. Any ads you see within Duck Player will not be personalized; in our testing, this prevented ads on most videos altogether. YouTube still registers your views, so it’s not totally anonymous, but none of the videos you watch in Duck Player contribute to your YouTube advertising profile or suggest distracting personalized recommendations. The feature can be always-on, ready to go whenever you click a YouTube link, or you can opt in on specific videos – perfect for when you’re sharing your screen, using a shared device, or just trying to stay focused. It’s equally easy to get back to the default version of YouTube whenever you want.
Open YouTube links in Duck Player for more-private viewing
Eliminating invasive ads as you browse: DuckDuckGo for Mac has always blocked invasive trackers before they load, effectively eliminating the ads that rely on that creepy tracking. (Because so many ads work that way, you’ll see way fewer ads.) Today, we’ve made another big improvement: we’re cleaning up the whitespace left behind by those ads for an efficient, distraction-free look without the need for a separate ad blocker.
More choices for secure password management: Our browser includes our own secure and easy-to-use password manager that can automatically remember and fill in login credentials and suggest random passwords for new logins. (It can also securely save addresses and payment methods.) Our autofill experience is continually improving and will roll out on our mobile apps soon.
This works for most users, especially since you can import passwords. But we understand some folks want to continue using third-party password management across browsers and devices. So, we’ve teamed up with Bitwarden, the accessible open-source password manager, in the first of what we hope to be several similar integrations. In the coming weeks, Bitwarden users will be able to activate this seamless two-way integration in their browser settings. DuckDuckGo for Mac is also compatible with 1Password’s new universal autofill feature.
Easily autofill your Bitwarden passwords in DuckDuckGo for Mac
“The DuckDuckGo browser has been a breath of fresh air, a lightweight and snappy browser that isn't a gamified gimmick and doesn’t sell my browsing history to advertisers. Its clean and familiar UI allowed me to switch with no hassle. I would definitely recommend more people switching as soon as they can.”
“The automatic cookie settings feature is awesome!!!”
“I love the UI of this app! Very clean and minimalist. Also, it really is blazing fast. I appreciate the careful consideration into design and performance with the use of the internal rendering engine. Thank you for all your work!”
“DuckDuckGo is replacing Google Chrome on my Mac and I love it.”
“I’ve been using [DuckDuckGo for Mac] for several months and I have to say, I love the simplicity and privacy. We’ve tossed a lot of stuff into browsers over the years to get privacy and speed. This achieves both with much less.”
We built DuckDuckGo for Mac with privacy, security, and simplicity in mind. Our default privacy settings are stronger than what most other browsers offer, and you don’t need to sift through obscure menus to turn them on. DuckDuckGo for Mac is not a “fork” of Chromium, or any other browser code. All the app code – tab and bookmark management, our new tab page, our password manager, etc. – is written by our own engineers. For rendering, it uses a public macOS API, making it super compatible with Mac devices. DuckDuckGo believes in open sourcing our apps and extensions whenever possible, and we plan to do so for DuckDuckGo for Mac before it moves out of beta.
We’re proud of how far DuckDuckGo for Mac has come in this short time, and it will only get better from here! Users will soon be able to sync DuckDuckGo bookmarks and passwords across devices. We’ll also be adding more built-in features that offer native alternatives to more popular extensions. Please keep the feedback coming; we're listening! (You can find the feedback form in the app's three-dot menu, right under the Fire Button.)
Before you ask, yes, our Windows browser is still on the way! DuckDuckGo for Windows is in an early friends and family beta, with a private waitlist beta expected in the coming months. (Right now, Mac and Windows are the only desktop platforms we’re focusing on.) Stay tuned for updates. And if you’re interested in working on our desktop apps, we’re hiring remotely, worldwide.
On Tuesday, September 13th, 13 privacy-focused technology companies representing more than 100 million users in the United States published a letter to U.S. Congressional Leadership imploring them to support the American Innovation and Choice Online Act (AICOA) and bring it to a floor vote as soon as possible.
Incessant data collection and tech monopolies are inherently linked: the more data they collect and use to influence user decision making, the stronger their grip on industry becomes, leaving users feeling like they have no option but to accept a lack of privacy to use the Internet. However, users do have choices when it comes to the services they use, and they do not have to accept services that have made it their business to abuse user privacy. If the American Innovation and Choice Online Act (AICOA) becomes law, millions of Americans will have better access to Internet services with more privacy and less data-driven targeting and manipulation.
U.S. Senator Chuck Schumer, Senate Majority Leader
U.S. Senator Mitch McConnell, Senate Minority Leader
U.S. Senator Dick Durbin, Senate Majority Whip
U.S. Senator John Thune, Senate Minority Whip
U.S. Representative Nancy Pelosi, Speaker of the House
U.S. Representative Kevin McCarthy, House Minority Leader
U.S. Representative Steny Hoyer, House Majority Leader
U.S. Representative Steve Scalise, House Minority Whip
RE: Support for S. 2992/H.R. 3816, The American Innovation and Choice Online Act.
Dear U.S. Congressional Leadership:
We, the undersigned privacy companies and organizations, urge Congress to schedule floor votes for the American Innovation and Choice Online Act (AICOA) as soon as possible. This bill has been delayed for far too long and the American public deserves the kind of innovative online ecosystem it would create.
Our companies and organizations offer privacy-protective alternatives to the services provided by dominant technology companies. While more and more Americans are embracing privacy-first technologies, some dominant firms still use their gatekeeper power to limit competition and restrict user choice. We implore you to pass AICOA as it would remove barriers for consumers to freely select privacy-protective services.
Massive tech platforms can exert influence over society and the digital economy because they ultimately have the power to collect, analyze, and monetize exorbitant amounts of personal information. This is not by accident, as some of the tech giants have intentionally abused their gatekeeper positions to lock users into perpetual surveillance while simultaneously making it difficult to switch to privacy-protective alternatives. These monopolist firms: use manipulative design tactics to steer individuals away from rival services; restrict the ability of competitors to interoperate on the platform; use non-public data to benefit their services or products; and make it impossible or complicated for users to change their default settings or uninstall apps. Such tactics deprive consumers of the innovative offerings an open and vibrant market would yield.
Passage of AICOA is critical to protecting the privacy of American consumers. These self-preferencing tactics keep consumers stuck in an ecosystem of constant tracking by making it needlessly difficult for users to choose alternative privacy-respecting products and services. This is not how a truly free market operates, which is why commonsense reforms are necessary to combat the most egregious anticompetitive tactics and spur innovation that will increase the options available to American consumers. That’s why we support the AICOA and ask that it be scheduled for a vote. The AICOA will improve the internet in many ways and, most importantly, remove barriers that have been erected to block Americans from enjoying more privacy online.
Sincerely,
Andi
Brave
Disconnect
DuckDuckGo
Efani Secure Mobile
Fathom Analytics
Malloc
Mozilla
Neeva
Proton
Skiff
Thexyz Inc.
Tutanota
You.com
[Post updated December 19th, 2022 to reflect the addition of Skiff.]
Have you ever entered your email for a loyalty program or coupon and started getting emails from companies you didn’t subscribe to? Or noticed ads following you around after clicking on an email link? You’re not alone! There are multiple ways companies can use your email to track you, target you with ads, and influence what you see online. They can even share your personal information with third parties – all without your knowledge.
Companies embed trackers in images and links within email messages, letting them collect information like when you’ve opened a message, where you were when you opened it, and what device you were using. In our closed Email Protection beta, we found that approximately 85% of beta testers’ emails contained hidden email trackers! Very sneaky. Companies can use this information to build a profile about you.
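One common form of the image-based tracking described above is a tiny "tracking pixel" loaded from a tracker's server. The sketch below is a simplified illustration of how such images could be stripped from an email's HTML; the tracker host and the regex approach are assumptions for demonstration, not DuckDuckGo's implementation:

```python
# Simplified sketch of email tracking-pixel removal. The tracking host below
# is hypothetical, and real tracker stripping is more involved than a regex.
# It removes <img> tags whose source points at a known tracking host.

import re
from urllib.parse import urlparse

TRACKING_HOSTS = {"pixel.tracker.example"}  # hypothetical tracker dataset

def strip_tracking_pixels(html):
    """Remove img tags whose src host is a known tracking host."""
    def replace(match):
        host = urlparse(match.group(1)).hostname or ""
        return "" if host in TRACKING_HOSTS else match.group(0)
    return re.sub(r'<img[^>]*\bsrc="([^"]*)"[^>]*>', replace, html)

email_body = (
    '<p>Hello!</p>'
    '<img src="https://pixel.tracker.example/open?id=123" width="1" height="1">'
    '<img src="https://images.example/logo.png">'
)
print(strip_tracking_pixels(email_body))
```

Removing the tracking image before the email reaches your inbox means the "message opened" signal never fires, which is the core idea behind blocking trackers embedded in images.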
And because your email addresses are connected to so much of what you do online – making purchases, using social media, and more – tracking companies can also effectively use your personal email address as a profiling identifier. In fact, many companies are so hungry for your personal email address that they’ll actually pull it from online forms you haven’t even submitted yet! Beyond sending you more emails, companies often upload your email address to Facebook and Google to target you with creepy ads across apps and websites.
DuckDuckGo Email Protection is a free email forwarding service that removes multiple types of hidden email trackers and lets you create unlimited unique private email addresses on the fly. You can use Email Protection with your current email provider and app – no need to update your contacts or juggle multiple accounts. Email Protection works seamlessly in the background to deliver your more-private emails right to your inbox.
Signing up for Email Protection gives you the ability to create Duck Addresses. There are two types that help protect your email privacy: your personal Duck Address (@duck.com), and unique, private Duck Addresses that you can generate on the fly.
Many users have loved the Email Protection beta so far, with millions of more-private emails being forwarded weekly. It’s email privacy, simplified – and we’re thrilled to open the beta for everyone to try it out!
Since launching DuckDuckGo Email Protection into private waitlist beta, we’ve been continuously making improvements based on feedback.
Link Tracking Protection: In addition to blocking trackers in images, scripts, and other media directly embedded in emails, we can now detect and remove a growing number of the trackers embedded in email links.
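Trackers embedded in links often take the form of extra query parameters appended to the URL. As a rough illustration (the parameter names below are common examples, not DuckDuckGo's actual dataset), stripping them can look like this:

```python
# Illustrative sketch of link tracking removal. The parameter list is an
# assumption for demonstration; DuckDuckGo maintains its own dataset.
# Known tracking query parameters are stripped from a URL, leaving the
# rest of the link intact.

from urllib.parse import urlparse, urlencode, parse_qsl, urlunparse

TRACKING_PARAMS = {"utm_source", "utm_medium", "utm_campaign", "fbclid"}

def clean_link(url):
    """Return url with known tracking query parameters removed."""
    parts = urlparse(url)
    kept = [(k, v) for k, v in parse_qsl(parts.query) if k not in TRACKING_PARAMS]
    return urlunparse(parts._replace(query=urlencode(kept)))

print(clean_link("https://shop.example/item?id=42&utm_source=newsletter&fbclid=abc"))
```

The link still works as before; only the parameters that exist to identify you are dropped.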
Smarter Encryption: We’ve started using the same Smarter Encryption (HTTPS Upgrading) that’s at work in our search engine and apps to upgrade insecure (unencrypted, HTTP) links in emails to secure (encrypted, HTTPS) links when they’re on our upgradable list.
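Conceptually, the upgrade is a conditional rewrite: an http:// link becomes https:// only when the host is known to serve the same content securely. The list below is a made-up stand-in for DuckDuckGo's Smarter Encryption dataset, used purely for illustration:

```python
# Minimal sketch of HTTPS upgrading. The upgradable-host list is a
# hypothetical stand-in for DuckDuckGo's Smarter Encryption dataset.
# An http:// link is rewritten to https:// only when its host is known
# to serve the same content over HTTPS.

from urllib.parse import urlparse

UPGRADABLE_HOSTS = {"example.com", "news.example.org"}  # hypothetical list

def upgrade_link(url):
    """Rewrite http URLs to https when the host is on the upgradable list."""
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname in UPGRADABLE_HOSTS:
        return url.replace("http://", "https://", 1)
    return url

print(upgrade_link("http://example.com/page"))     # upgraded
print(upgrade_link("http://legacy.example/page"))  # left as-is
```

Gating the rewrite on a known-good list avoids breaking links on hosts that don't actually support HTTPS.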
Replying from your Duck Addresses: You can now reply to emails from all your Duck Addresses. When you get an email to a Duck Address, you can just hit ‘Reply,’ type your message, and send it off. Your email will then be delivered from your Duck Address instead of your personal address.
Self-Service Dashboard: Want to update your forwarding address? Or even delete your account? You can now make changes to your Duck account whenever you want, saving you time and effort.
Wondering how this feature works in the real world? Here’s what our beta testers had to say:
Email Protection is supported in the DuckDuckGo Privacy Browser for iOS and Android, DuckDuckGo for Mac (beta), and DuckDuckGo Privacy Essentials browser extensions for Firefox, Chrome, Edge, and Brave.
Once you follow the steps to create your personal Duck Address, you’re all set to start using it right away! And while browsing, look for Dax the Duck (our mascot) to help you autofill your personal Duck Address or generate a private Duck Address for you on the fly.
Like all our features, DuckDuckGo Email Protection will never track you. We believe that your emails are none of our business! When your Duck Addresses receive an email, we immediately apply our tracking protections and then forward it to you, never saving it on our systems. Sender information, subject lines...we don’t track any of it. (Learn more in our Email Protection Privacy Policy and Terms of Service.)
Additionally, we are committed to Email Protection for the long term, so you can feel confident about using your Duck Addresses. During the private beta, we’ve been shoring up our backend systems to support millions of users. And as we move out of beta, we'll also be incorporating our email tracker dataset into our open source Tracker Radar.
So give Email Protection a try and let us know what you think! We look forward to helping you protect your inbox.
Our vision at DuckDuckGo is to raise the standard of trust online. Raising that standard means maximizing the privacy we offer by default, being transparent about how our privacy protections work, and doing our best to make the Internet less creepy. Recently, I’ve heard from a number of users and understand that we didn’t meet their expectations around one of our browser’s web tracking protections. So today we are announcing more privacy and transparency around DuckDuckGo’s web tracking protections.
Over the next week, we will expand the third-party tracking scripts we block from loading on websites to include scripts from Microsoft in our browsing apps (iOS and Android) and our browser extensions (Chrome, Firefox, Safari, Edge and Opera), with beta apps to follow in the coming month. This expands our 3rd-Party Tracker Loading Protection, which blocks identified tracking scripts from Facebook, Google, and other companies from loading on third-party websites, to now include third-party Microsoft tracking scripts. This web tracking protection is not offered by most other popular browsers by default and sits on top of many other DuckDuckGo protections. We explain how this works differently with DuckDuckGo advertising below.
Websites often embed scripts from other companies (commonly called “third-party scripts”) that automatically load when you visit their site. For example, the most prevalent third-party script is Google Analytics, which helps websites understand how their sites are being used. But typically Google can also use this information to profile you outside of the site where the information originated. Most browsers’ default tracking protection focuses on cookie and fingerprinting protections that only restrict third-party tracking scripts after they load in your browser. Unfortunately, that level of protection leaves information like your IP address and other identifiers sent with loading requests vulnerable to profiling. Our 3rd-Party Tracker Loading Protection helps address this vulnerability, by stopping most 3rd-party trackers from loading in the first place, providing significantly more protection.
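As a rough illustration (not DuckDuckGo's actual implementation), blocking a tracker before it loads amounts to a decision made at request time, checked against a known tracker-domain list; the domains below are illustrative:

```python
# A simplified sketch of blocking tracker requests before they load.
# TRACKER_DOMAINS is a hypothetical stand-in for a real tracker data set.
TRACKER_DOMAINS = {"google-analytics.com", "facebook.net"}  # illustrative

def should_block(request_host: str, page_host: str) -> bool:
    """Block a third-party request whose host matches a tracker domain
    (or any of its subdomains); first-party requests are allowed."""
    if request_host == page_host or request_host.endswith("." + page_host):
        return False  # first-party: let the site load its own resources
    return any(
        request_host == d or request_host.endswith("." + d)
        for d in TRACKER_DOMAINS
    )

# Because the decision happens before the request is sent, no IP
# address or other identifiers ever reach the tracking company.
print(should_block("www.google-analytics.com", "news.example"))  # True
```

The key difference from cookie or fingerprinting protections is that the request is never sent at all, so there is nothing for the tracker to log.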
Previously, we were limited in how we could apply our 3rd-Party Tracker Loading Protection on Microsoft tracking scripts due to a policy requirement related to our use of Bing as a source for our private search results. We’re glad this is no longer the case. We have not had, and do not have, any similar limitation with any other company.
Microsoft scripts were never embedded in our search engine or apps, which do not track you. Websites insert these scripts for their own purposes, and so they never sent any information to DuckDuckGo. Since we were already restricting Microsoft tracking through our other web tracking protections, like blocking Microsoft’s third-party cookies in our browsers, this update means we’re now doing much more to block trackers than most other browsers.
Advertising on DuckDuckGo is done in partnership with Microsoft. Viewing ads on DuckDuckGo is anonymous, and Microsoft has committed to not profile our users on ad clicks: “when you click on a Microsoft-provided ad that appears on DuckDuckGo, Microsoft Advertising does not associate your ad-click behavior with a user profile. It also does not store or share that information other than for accounting purposes.”
To evaluate whether an ad on DuckDuckGo is effective, advertisers want to know if their ad clicks turn into purchases (conversions). To see this within Microsoft Advertising, they use Microsoft scripts from the bat.bing.com domain. Currently, if an advertiser wants to detect conversions for their own ads that are shown on DuckDuckGo, 3rd-Party Tracker Loading Protection will not block bat.bing.com requests from loading on the advertiser’s website following DuckDuckGo ad clicks, but these requests are blocked in all other contexts. For anyone who wants to avoid this, it's possible to disable ads in DuckDuckGo search settings.
To eventually replace the reliance on bat.bing.com for evaluating ad effectiveness, we’ve started working on an architecture for private ad conversions that can be externally validated as non-profiling. DuckDuckGo isn’t alone in trying to solve this issue; Safari is working on Private Click Measurement (PCM) and Firefox is working on Interoperable Private Attribution (IPA). We hope these efforts can help move the entire digital ad industry forward to making privacy the default. We think this work is important because it means we can improve the advertising-based business model that countless companies rely on to provide free services, making it more private instead of throwing it out entirely.
Our browser extensions and non-beta apps are already open source, as is our Tracker Radar – the data set of trackers and other third-party web activity we identify through crawling. We’ve now also made our tracker protection list publicly available, so folks can see for themselves what we’re blocking and report any issues. We’ve also updated the Privacy Dashboard within our apps and extensions to show more information about third-party requests. Using the updated Privacy Dashboard, users can see which third-party requests have been blocked from loading and which other third-party requests have loaded, with reasons for both when available.
To further deliver on our commitment to transparency, we’ve posted a new help page that offers a comprehensive explanation of all the web tracking protections we provide across platforms. Users now have one place to look if they want to understand the different kinds of web privacy protections we offer on the platforms they use. This page also explains how different web tracking protections are offered based on what is technically possible on each platform, as well as what’s in development for this part of our product roadmap.
I’ve been building DuckDuckGo as an independent company for almost 15 years. After all this time, I believe more than ever that the majority of people online would choose to be more private if they could press a privacy “easy button.” That’s why our product vision is to pack as much privacy as we can into one package. We’re committed for the long haul to make simple privacy protection available to all, and will continue striving to strengthen the quality, understanding, and confidence in our product.
Governments, researchers, and policy makers need accurate market share data to evaluate search engine market diversity (or lack thereof). As explained by our series of posts on search engine choice screens (also known as preference menus), a well-designed choice screen could significantly increase competition and give users meaningful choice and control. However, without accurate search market share data, it is difficult to assess whether a particular choice screen is effective overall or to ensure consumers are presented with the search engines they want to use.
Common sources of search market share data, like the often-cited comScore and Statcounter, vary significantly for non-Google search engines, which creates confusion around search engine market share. Additionally, both these and other commonly cited sources have significant methodological deficiencies. In short, comScore suffers from panel selection bias (privacy-conscious users, for example, are unlikely to agree to be surveilled by comScore), while Statcounter's core flaw is that it uses trackers, which are often blocked by tracker-blocking tools, whether search engine apps and extensions (like ours) or other common apps and browser extensions. And both comScore and Statcounter reports are further flawed because they either do not report, and/or do not have, a sufficiently large and representative sample of users across all major markets and platforms.
Recently, two new market share reports were released by Cloudflare and Wikipedia respectively. Unlike comScore, Cloudflare’s and Wikipedia’s reports do not suffer from panel selection bias since they are not based on panels but instead based on traffic referred to Cloudflare-hosted websites and Wikipedia, respectively. And unlike Statcounter, this method also means Cloudflare’s and Wikipedia’s data is not affected by tracker-blocking tools. While Wikipedia is just one site, Cloudflare’s report is based on a large swath of the global Internet (25% of the top million websites use Cloudflare) so sample size isn’t a problem.
For these reasons, we recommend Cloudflare's report as currently the best source for baseline assessments of search engine market share and for assessing the effect of competition interventions like search preference menus. Wikipedia’s report is also useful because it can be analyzed in unique ways (more on both reports below). However, despite the methodological differences between all these reports, all still show that Google dominates the search engine market.
Cloudflare’s search market share report
Cloudflare's report is based on referrer data from search engine link clicks. When you click a search result and visit a website, that site can see which search engine domain you came from (via referrer information, e.g., duckduckgo.com). This report is made possible through Cloudflare Radar, a free public tool that lets anyone view global traffic as well as security trends and insights across the Internet as they happen. Cloudflare Radar is powered by the aggregated traffic flowing through the Cloudflare network. Radar insights like these are created by looking at patterns derived from aggregated data that has been anonymized, and so does not contain any search queries or personal information. (To be clear, that means that if you click on a link for a Cloudflare-supported site from DuckDuckGo, your referrer information does not reveal your search query or any personal information about you.)
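The counting behind a referrer-based report can be sketched as aggregating search-engine hostnames from referrer URLs; the engine list and referrer values below are illustrative, not Cloudflare's actual pipeline:

```python
# A toy sketch of referrer-based market share measurement: count which
# search engine domain each visit was referred from, then normalize.
from collections import Counter
from urllib.parse import urlsplit

SEARCH_ENGINES = {"google.com", "bing.com", "duckduckgo.com", "ecosia.org"}

def market_share(referrers: list[str]) -> dict[str, float]:
    """Return each search engine's share of search-referred visits."""
    hits = Counter()
    for ref in referrers:
        host = (urlsplit(ref).hostname or "").removeprefix("www.")
        if host in SEARCH_ENGINES:
            hits[host] += 1
    total = sum(hits.values())
    return {engine: n / total for engine, n in hits.items()} if total else {}

shares = market_share([
    "https://www.google.com/",
    "https://duckduckgo.com/",
    "https://www.google.com/",
])
# google.com gets 2/3 of the search-referred visits, duckduckgo.com 1/3
```

Note that only the referring hostname is needed; no search query or personal information enters the calculation, consistent with how the report describes its anonymized aggregation.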
Cloudflare’s report is updated quarterly, and the report can be split by operating system, device type, country, and month.
Wikipedia’s search market report
Wikipedia also recently published their search engine traffic data using a similar methodology. Every day Wikipedia counts link clicks from search engines and aggregates them into the search market share dashboard (also using direct referral data in a private manner).
We recommend Wikipedia’s data for more granular insights because their dashboard can be split in more ways, including by language, operating system, device type, and country, down to the day.
However, we recommend Cloudflare’s data to support higher-impact decisions because Wikipedia is just one site, whereas Cloudflare’s report is based on millions of sites. While Wikipedia’s data depends on the extent to which search engines include Wikipedia in their knowledge panels and search results, Cloudflare’s sample is so large that per-site effects are minimized.
In fact, we now believe Cloudflare’s report is by far the most accurate one of all search engine market share reports out there. With it, governments, researchers, and policy makers can better understand the search engine market and the effect of tools like search choice screens.
The search engine and browser you use should be a personal choice, but right now it's often too complicated to switch away from gatekeeper defaults. So in an open letter to the companies, consumer organizations, and regulators with the power to create effective user choice screens, the CEOs of DuckDuckGo and Ecosia, and Qwant's President published a set of common-sense principles to improve this user experience online. This letter coincides with the final adoption of the EU's Digital Markets Act by the European Parliament this week.
Open Letter from DuckDuckGo, Ecosia, and Qwant
Choice screens and effective switching mechanisms are crucial tools that empower users and enable competition in the search engine and browser markets. The European Union (EU) has taken an important first step by adopting the Digital Markets Act (DMA), which includes obligations to implement such tools. However, the effectiveness of the EU’s mandates and related regulatory efforts across the globe will depend on how gatekeepers implement changes to comply with these new rules.
Without strict adherence to both clear rules and principles for fair choice screens and effective switching mechanisms, gatekeeping firms could choose to circumvent their legal obligations. We suggest regulators make clear their enforcement should adhere to the following ten essential principles for fair choice screens and effective switching mechanisms:
Gatekeeping firms should globally roll out fair choice screens and effective switching mechanisms now, using these principles. We are ready to work collaboratively towards this end, honoring the users’ desire to choose the services they want to use, and not having those choices decided for them by default.
SIGNATORIES
In case you missed it: Find our series of blogs on search choice here.
If you're a Google Chrome user, you might be surprised to learn that you may soon be automatically entered into Google's new tracking and ad targeting methods called Topics and FLEDGE. Topics uses your Chrome browsing history to automatically collect information about your interests to share with other businesses, tracking companies and websites without your knowledge. FLEDGE enables your Chrome browser to target you with ads based on your browsing history. These new methods enable creepy advertising and other content targeting without third-party cookies. While Google is positioning this as more privacy respecting, the simple fact is that tracking, targeting, and profiling is still tracking, targeting, and profiling, no matter what you call it.
1. Don't use Google Chrome! Google Topics and FLEDGE will only exist in Google Chrome. On iOS or Android we suggest you use our DuckDuckGo mobile browser, which offers best-in-class privacy protection by default when searching and browsing. Plus, we recently launched more app features into beta that will better protect your online privacy, like Email Protection and App Tracking Protection for Android. On desktop, we just launched the DuckDuckGo app for Mac into beta (Windows coming soon) so you can skip the Chrome headache completely and use ours by joining our waitlist (which is moving quickly).
2. Install the DuckDuckGo Chrome extension. In response to Google automatically turning on Topics and FLEDGE in Chrome, we've enhanced our Chrome extension to block Topics and FLEDGE interactions on websites, stopping these new forms of targeting. This is in addition to the all-in-one privacy protection that our extension offers, including private search, tracker blocking, Smarter Encryption, and Global Privacy Control. The Topics and FLEDGE blocking addition is included as of version 2022.4.18 which should auto-update, though you can also check the version you have installed from the extensions list within Chrome. For non-Chrome desktop browsers, you can get our extension here.
3. Change your Chrome and Google settings, which we recommend you do regardless of whether you continue to use Chrome or Google.
Note that even if you change these settings, we also recommend installing the DuckDuckGo Chrome extension to get more privacy protection than possible using Chrome settings alone.
In 2021, Google reluctantly signaled it would follow other browsers to forbid the use of third-party cookies by default, though it recently delayed doing so to at least 2023. Unlike other browsers, however, instead of just dropping third-party cookies, they are trying to replace them with alternative tracking mechanisms that are just as creepy and privacy invasive.
They first implemented a new tracking method in Chrome called Federated Learning of Cohorts (FLoC). FLoC was automatically turned on for millions of Google users who were not even given the chance to opt-out. This was understandably met with widespread criticism from privacy experts. To address the situation, we voiced our concerns and immediately enhanced our tracker blocking so that our Chrome extension would protect you from FLoC.
In response, Google announced it's ending FLoC and replacing it with yet another tracking method called Topics. Like FLoC, Topics will automatically use your browsing history to infer your interests in topics (e.g., “Child Internet Safety”, “Personal Loans”, etc.). While FLoC automatically shared a cohort identifier (for a group of people with correlated interests or demographics) with websites and tracking companies, Topics will automatically share a subset of your inferred interests, which these companies can then use to target ads and content at you.
While some suggest that Topics is a less invasive way of ad targeting, we don't agree. Why not? Fundamentally, it’s because, by default, Google Chrome will still be automatically surveilling your online activity and sharing information about you with advertisers and other parties so they can behaviorally target you without your consent. This targeting, regardless of how it's done, enables manipulation (e.g., exploiting personal vulnerabilities), discrimination (e.g., people not seeing job opportunities based on personal profiles), and filter bubbles (e.g., creating echo chambers that can divide people) that many people would like to avoid. Google says that users will be able to go in and delete “Topics” they don’t want shared, but Google knows full well that people rarely change default settings. The company also routinely puts “dark patterns” in the way of users changing these settings, making it needlessly difficult for people to take control over their privacy. Privacy should be the default.
In addition, the implementation of Topics presents a bunch of other privacy problems, including:
You know those ads that seem to follow you around onto every website you visit, long after looking something up online? Known as “re-targeting”, these ads are shown to you based on your browsing history from other websites, stored in third-party cookies. With the planned removal of third-party cookies Google decided to also introduce FLEDGE, a new method of re-targeting that similarly moves Google ad technology directly into the Chrome browser.
When you visit a website where the advertiser may want to later follow you with an ad, the advertiser can tell your Chrome browser to put you into an interest group. Then, when you visit another website which displays ads, your Chrome browser will run an ad auction based on your interest groups and target specific ads at you. So much for your browser working for you!
People are, by and large, vehemently against ad re-targeting and find it invasive and creepy. Because your browsing history is used to target you, just like Topics it opens you up to the same type of manipulation, discrimination, and potential embarrassment from highly personal ads being shown via your browser, and also operates without your consent.
For all of the above reasons and more, DuckDuckGo has enhanced the tracker blocking for our Privacy Essentials Chrome extension to block Google Topics and FLEDGE. This is directly in line with the extension's purpose of protecting your privacy holistically as you use Chrome, without any of the complicated settings. It's privacy, simplified.
DISCLAIMER:
To consolidate all of our security intelligence and news in one location, we have migrated Naked Security to the Sophos News platform.
It took six months for notifications to start, and we still don't know exactly what went down... but here's our advice on what to do.
Latest episode - listen now! Full transcript inside...
Imagine if you clicked on a harmless-looking image, but an unknown application fired up instead...
Cryptography isn't just about secrecy. You need to take care of authenticity (no imposters!) and integrity (no tampering!) as well.
WYSIWYG is short for "what you see is what you get". Except when it isn't...
Celebrating the true crypto bros. Listen now (full transcript available).
Apps on your iPhone must come from the App Store. Except when they don't... we explain what to look out for.
The rise of tap-to-pay and chip-and-PIN hasn't rid the world of ATM card skimming criminals...
The site was running from 2014 and allegedly raked in more than $20m, which the DOJ is seeking to claw back...
DISCLAIMER:
For nearly a dozen years, residents of South Carolina have been kept in the dark by state and federal investigators over who was responsible for hacking into the state’s revenue department in 2012 and stealing tax and bank account information for 3.6 million people. The answer may no longer be a mystery: KrebsOnSecurity found compelling clues suggesting the intrusion was carried out by the same Russian hacking crew that stole millions of payment card records from big box retailers like Home Depot and Target in the years that followed.
Questions about who stole tax and financial data on roughly three quarters of all South Carolina residents came to the fore last week at the confirmation hearing of Mark Keel, who was appointed in 2011 by Gov. Nikki Haley to head the state’s law enforcement division. If approved, this would be Keel’s third six-year term in that role.
The Associated Press reports that Keel was careful not to release many details about the breach at his hearing, telling lawmakers he knows who did it but that he wasn’t ready to name anyone.
“I think the fact that we didn’t come up with a whole lot of people’s information that got breached is a testament to the work that people have done on this case,” Keel asserted.
A ten-year retrospective published in 2022 by The Post and Courier in Columbia, S.C. said investigators determined the breach began on Aug. 13, 2012, after a state IT contractor clicked a malicious link in an email. State officials said they found out about the hack from federal law enforcement on October 10, 2012.
KrebsOnSecurity examined posts across dozens of cybercrime forums around that time, and found only one instance of someone selling large volumes of tax data in the year surrounding the breach date.
On Oct. 7, 2012 — three days before South Carolina officials say they first learned of the intrusion — a notorious cybercriminal who goes by the handle “Rescator” advertised the sale of “a database of the tax department of one of the states.”
“Bank account information, SSN and all other information,” Rescator’s sales thread on the Russian-language crime forum Embargo read. “If you purchase the entire database, I will give you access to it.”
A week later, Rescator posted a similar offer on the exclusive Russian forum Mazafaka, saying he was selling information from a U.S. state tax database, without naming the state. Rescator said the data exposed included Social Security Number (SSN), employer, name, address, phone, taxable income, tax refund amount, and bank account number.
“There is a lot of information, I am ready to sell the entire database, with access to the database, and in parts,” Rescator told Mazafaka members. “There is also information on corporate taxpayers.”
On Oct. 26, 2012, the state announced the breach publicly. State officials said they were working with investigators from the U.S. Secret Service and digital forensics experts from Mandiant, which produced an incident report (PDF) that was later published by South Carolina Dept. of Revenue. KrebsOnSecurity sought comment from the Secret Service, South Carolina prosecutors, and Mr. Keel’s office. This story will be updated if any of them respond. Update: The Secret Service declined to comment.
On Nov. 18, 2012, Rescator told fellow denizens of the forum Verified he was selling a database of 65,000 records with bank account information from several smaller, regional financial institutions. Rescator’s sales thread on Verified listed more than a dozen database fields, including account number, name, address, phone, tax ID, date of birth, employer and occupation.
Asked to provide more context about the database for sale, Rescator told forum members the database included financial records related to tax filings of a U.S. state. Rescator added that there was a second database of around 80,000 corporations that included social security numbers, names and addresses, but no financial information.
The AP says South Carolina paid $12 million to Experian for identity theft protection and credit monitoring for its residents after the breach.
“At the time, it was one of the largest breaches in U.S. history but has since been surpassed greatly by hacks to Equifax, Yahoo, Home Depot, Target and PlayStation,” the AP’s Jeffrey Collins wrote.
As it happens, Rescator’s criminal hacking crew was directly responsible for the 2013 breach at Target and the 2014 hack of Home Depot. Rescator’s cybercrime shops went on to sell roughly 40 million payment cards stolen from Target, and 56 million cards from Home Depot customers.
Who is Rescator? On Dec. 14, 2023, KrebsOnSecurity published the results of a 10-year investigation into the identity of Rescator, a.k.a. Mikhail Borisovich Shefel, a 36-year-old who lives in Moscow and who recently changed his last name to Lenin.
Mr. Keel’s assertion that somehow the efforts of South Carolina officials following the breach may have lessened its impact on citizens seems unlikely. The stolen tax and financial data appears to have been sold openly on cybercrime forums by one of the Russian underground’s most aggressive and successful hacking crews.
While there are no indications from reviewing forum posts that Rescator ever sold the data, his sales threads came at a time when the incidence of tax refund fraud was skyrocketing.
Tax-related identity theft occurs when someone uses a stolen identity and SSN to file a tax return in that person’s name claiming a fraudulent refund. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually owed a refund from the U.S. Internal Revenue Service (IRS).
According to a 2013 report from the Treasury Inspector General’s office, the IRS issued nearly $4 billion in bogus tax refunds in 2012, and more than $5.8 billion in 2013. The money largely was sent to people who stole SSNs and other information on U.S. citizens, and then filed fraudulent tax returns on those individuals claiming a large refund but at a different address.
It remains unclear why Shefel has never been officially implicated in the breaches at Target, Home Depot, or in South Carolina. It may be that Shefel has been indicted, and that those indictments remain sealed for some reason. Perhaps prosecutors were hoping Shefel would believe no one was looking for him and decide to leave Russia, at which point it would be easier to apprehend him.
But all signs are that Shefel is deeply rooted in Russia, and has no plans to leave. In January 2024, authorities in Australia, the United States and the U.K. levied financial sanctions against 33-year-old Russian man Aleksandr Ermakov for allegedly stealing data on 10 million customers of the Australian health insurance giant Medibank.
A week after those sanctions were put in place, KrebsOnSecurity published a deep dive on Ermakov, which found that he co-ran a Moscow-based IT security consulting business along with Mikhail Shefel called Shtazi-IT.
A Google-translated version of Shtazi dot ru. Image: Archive.org.
The U.S. government is warning that “smart locks” securing entry to an estimated 50,000 dwellings nationwide contain hard-coded credentials that can be used to remotely open any of the locks. The lock’s maker Chirp Systems remains unresponsive, even though it was first notified about the critical weakness in March 2021. Meanwhile, Chirp’s parent company, RealPage, Inc., is being sued by multiple U.S. states for allegedly colluding with landlords to illegally raise rents.
On March 7, 2024, the U.S. Cybersecurity & Infrastructure Security Agency (CISA) warned about a remotely exploitable vulnerability with “low attack complexity” in Chirp Systems smart locks.
“Chirp Access improperly stores credentials within its source code, potentially exposing sensitive information to unauthorized access,” CISA’s alert warned, assigning the bug a CVSS (badness) rating of 9.1 (out of a possible 10). “Chirp Systems has not responded to requests to work with CISA to mitigate this vulnerability.”
Matt Brown, the researcher CISA credits with reporting the flaw, is a senior systems development engineer at Amazon Web Services. Brown said he discovered the weakness and reported it to Chirp in March 2021, after the company that manages his apartment building started using Chirp smart locks and told everyone to install Chirp’s app to get in and out of their apartments.
“I use Android, which has a pretty simple workflow for downloading and decompiling the APK apps,” Brown told KrebsOnSecurity. “Given that I am pretty picky about what I trust on my devices, I downloaded Chirp and after decompiling, found that they were storing passwords and private key strings in a file.”
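The kind of check Brown describes can be approximated with a grep-style scan over decompiled sources. The patterns and file handling below are illustrative of the general technique, not Chirp's actual code or credentials:

```python
# A rough sketch of scanning decompiled app sources for hard-coded
# credentials: regex matching over every file under a directory tree.
import re
from pathlib import Path

# Illustrative patterns: quoted password-like assignments and PEM keys.
SECRET_PATTERNS = [
    re.compile(r"""(?i)(password|passwd|secret)\s*[:=]\s*["'][^"']+["']"""),
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
]

def scan_for_secrets(root: str) -> list[tuple[str, str]]:
    """Return (file path, matched text) pairs for suspected secrets."""
    findings = []
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue  # unreadable file; skip rather than abort the scan
        for pattern in SECRET_PATTERNS:
            for match in pattern.finditer(text):
                findings.append((str(path), match.group(0)))
    return findings
```

Real secret scanners (and careful reviewers like Brown) use far richer pattern sets and entropy checks, but the underlying lesson is the same: anything shipped in an app binary should be assumed readable by every user who installs it.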
Using those hard-coded credentials, Brown found an attacker could then connect to an application programming interface (API) that Chirp uses which is managed by smart lock vendor August.com, and use that to enumerate and remotely lock or unlock any door in any building that uses the technology.
Update, April 18, 11:55 a.m. ET: August has provided a statement saying it does not believe August or Yale locks are vulnerable to the hack described by Brown.
“We were recently made aware of a vulnerability disclosure regarding access control systems provided by Chirp, using August and Yale locks in multifamily housing,” the company said. “Upon learning of these reports, we immediately and thoroughly investigated these claims. Our investigation found no evidence that would substantiate the vulnerability claims in either our product or Chirp’s as it relates to our systems.”
Brown said when he complained to his leasing office, they sold him a small $50 key fob that uses Near-Field Communications (NFC) to toggle the lock when he brings the fob close to his front door. But he said the fob doesn’t eliminate the ability for anyone to remotely unlock his front door using the exposed credentials and the Chirp mobile app.
Also, the fobs pass the credentials to his front door over the air in plain text, meaning someone could clone the fob just by bumping against him with a smartphone app made to read and write NFC tags.
Neither August nor Chirp Systems responded to requests for comment. It’s unclear exactly how many apartments and other residences are using the vulnerable Chirp locks, but multiple articles about the company from 2020 state that approximately 50,000 units use Chirp smart locks with August’s API.
Roughly a year before Brown reported the flaw to Chirp Systems, the company was bought by RealPage, a firm founded in 1998 as a developer of multifamily property management and data analytics software. In 2021, RealPage was acquired by the private equity giant Thoma Bravo.
Brown said the exposure he found in Chirp’s products is “an obvious flaw that is super easy to fix.”
“It’s just a matter of them being motivated to do it,” he said. “But they’re part of a private equity company now, so they’re not answerable to anybody. It’s too bad, because it’s not like residents of [the affected] properties have another choice. It’s either agree to use the app or move.”
In October 2022, an investigation by ProPublica examined RealPage’s dominance in the rent-setting software market, and found that the company “uses a mysterious algorithm to help landlords push the highest possible rents on tenants.”
“For tenants, the system upends the practice of negotiating with apartment building staff,” ProPublica found. “RealPage discourages bargaining with renters and has even recommended that landlords in some cases accept a lower occupancy rate in order to raise rents and make more money. One of the algorithm’s developers told ProPublica that leasing agents had ‘too much empathy’ compared to computer generated pricing.”
Last year, the U.S. Department of Justice threw its weight behind a massive lawsuit filed by dozens of tenants who are accusing the $9 billion apartment software company of helping landlords collude to inflate rents.
In February 2024, attorneys general for Arizona and the District of Columbia sued RealPage, alleging RealPage’s software helped create a rental monopoly.
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) said today it is investigating a breach at business intelligence company Sisense, whose products are designed to allow companies to view the status of multiple third-party online services in a single dashboard. CISA urged all Sisense customers to reset any credentials and secrets that may have been shared with the company, which is the same advice Sisense gave to its customers Wednesday evening.
New York City based Sisense has more than a thousand customers across a range of industry verticals, including financial services, telecommunications, healthcare and higher education. On April 10, Sisense Chief Information Security Officer Sangram Dash told customers the company had been made aware of reports that “certain Sisense company information may have been made available on what we have been advised is a restricted access server (not generally available on the internet.)”
“We are taking this matter seriously and promptly commenced an investigation,” Dash continued. “We engaged industry-leading experts to assist us with the investigation. This matter has not resulted in an interruption to our business operations. Out of an abundance of caution, and while we continue to investigate, we urge you to promptly rotate any credentials that you use within your Sisense application.”
In its alert, CISA said it was working with private industry partners to respond to a recent compromise discovered by independent security researchers involving Sisense.
“CISA is taking an active role in collaborating with private industry partners to respond to this incident, especially as it relates to impacted critical infrastructure sector organizations,” the sparse alert reads. “We will provide updates as more information becomes available.”
Sisense declined to comment when asked about the veracity of information shared by two trusted sources with close knowledge of the breach investigation. Those sources said the breach appears to have started when the attackers somehow gained access to the company’s Gitlab code repository, and in that repository was a token or credential that gave the bad guys access to Sisense’s Amazon S3 buckets in the cloud.
Customers can use Gitlab either as a solution that is hosted in the cloud at Gitlab.com, or as a self-managed deployment. KrebsOnSecurity understands that Sisense was using the self-managed version of Gitlab.
Both sources said the attackers used the S3 access to copy and exfiltrate several terabytes worth of Sisense customer data, which apparently included millions of access tokens, email account passwords, and even SSL certificates.
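One reason a credential left in a repository is so dangerous is that cloud keys have a recognizable shape, so anyone who obtains the code can find them mechanically. As a hedged illustration (unrelated to Sisense's actual code), AWS access key IDs begin with "AKIA" followed by 16 upper-case alphanumeric characters, and a one-pattern scan will flag them:

```python
import re

# AWS access key IDs share a well-known shape: "AKIA" followed by
# 16 upper-case alphanumeric characters. Matching that shape is how
# many secret scanners catch tokens committed to source control.
AWS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_aws_key_ids(text: str) -> list[str]:
    """Return any candidate AWS access key IDs found in a blob of text."""
    return AWS_KEY_RE.findall(text)
```

The address below uses AWS's own documented example key; the same check, run as a pre-commit hook, is a cheap defense against repeating this class of mistake.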
The incident raises questions about whether Sisense was doing enough to protect sensitive data entrusted to it by customers, such as whether the massive volume of stolen customer data was ever encrypted while at rest in these Amazon cloud servers.
It is clear, however, that unknown attackers now have all of the credentials that Sisense customers used in their dashboards.
The breach also makes clear that Sisense is somewhat limited in the clean-up actions that it can take on behalf of customers, because access tokens are essentially text files on your computer that allow you to stay logged in for extended periods of time — sometimes indefinitely. And depending on which service we’re talking about, it may be possible for attackers to re-use those access tokens to authenticate as the victim without ever having to present valid credentials.
Beyond that, it is largely up to Sisense customers to decide if and when they change passwords to the various third-party services that they’ve previously entrusted to Sisense.
Earlier today, a public relations firm working with Sisense reached out to learn if KrebsOnSecurity planned to publish any further updates on their breach (KrebsOnSecurity posted a screenshot of the CISO’s customer email to both LinkedIn and Mastodon on Wednesday evening). The PR rep said Sisense wanted to make sure they had an opportunity to comment before the story ran.
But when confronted with the details shared by my sources, Sisense apparently changed its mind.
“After consulting with Sisense, they have told me that they don’t wish to respond,” the PR rep said in an emailed reply.
Update, 6:49 p.m., ET: Added clarification that Sisense is using a self-hosted version of Gitlab, not the cloud version managed by Gitlab.com.
Also, Sisense’s CISO Dash just sent an update to customers directly. The latest advice from the company is far more detailed, and involves resetting a potentially large number of access tokens across multiple technologies, including Microsoft Active Directory credentials, GIT credentials, web access tokens, and any single sign-on (SSO) secrets or tokens.
The full message from Dash to customers is below:
“Good Afternoon,
We are following up on our prior communication of April 10, 2024, regarding reports that certain Sisense company information may have been made available on a restricted access server. As noted, we are taking this matter seriously and our investigation remains ongoing.
Our customers must reset any keys, tokens, or other credentials in their environment used within the Sisense application.
Specifically, you should:
– Change Your Password: Change all Sisense-related passwords on http://my.sisense.com
– Non-SSO:
– Replace the Secret in the Base Configuration Security section with your GUID/UUID.
– Reset passwords for all users in the Sisense application.
– Logout all users by running GET /api/v1/authentication/logout_all under Admin user.
– Single Sign-On (SSO):
– If you use SSO JWT for the user’s authentication in Sisense, you will need to update sso.shared_secret in Sisense and then use the newly generated value on the side of the SSO handler.
– We strongly recommend rotating the x.509 certificate for your SSO SAML identity provider.
– If you utilize OpenID, it’s imperative to rotate the client secret as well.
– Following these adjustments, update the SSO settings in Sisense with the revised values.
– Logout all users by running GET /api/v1/authentication/logout_all under Admin user.
– Customer Database Credentials: Reset credentials in your database that were used in the Sisense application to ensure continuity of connection between the systems.
– Data Models: Change all usernames and passwords in the database connection string in the data models.
– User Params: If you are using the User Params feature, reset them.
– Active Directory/LDAP: Change the username and user password of users whose authorization is used for AD synchronization.
– HTTP Authentication for GIT: Rotate the credentials in every GIT project.
– B2D Customers: Use the following API PATCH api/v2/b2d-connection in the admin section to update the B2D connection.
– Infusion Apps: Rotate the associated keys.
– Web Access Token: Rotate all tokens.
– Custom Email Server: Rotate associated credentials.
– Custom Code: Reset any secrets that appear in custom code Notebooks.
If you need any assistance, please submit a customer support ticket at https://community.sisense.com/t5/support-portal/bd-p/SupportPortal and mark it as critical. We have a dedicated response team on standby to assist with your requests.
At Sisense, we give paramount importance to security and are committed to our customers’ success. Thank you for your partnership and commitment to our mutual security.
Regards,
Sangram Dash
Chief Information Security Officer”
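For administrators scripting the checklist above, the "logout all users" call that appears twice in it might be built like this. The base URL and the bearer-token authentication scheme here are assumptions for illustration; confirm the exact auth mechanism against the Sisense REST API documentation for your deployment.

```python
from urllib import request

def build_logout_all_request(base_url: str, admin_token: str) -> request.Request:
    """Build the GET /api/v1/authentication/logout_all call from the advisory.

    The bearer-token header is an assumption; Sisense deployments may
    use a different auth scheme for admin API calls.
    """
    url = base_url.rstrip("/") + "/api/v1/authentication/logout_all"
    return request.Request(
        url,
        method="GET",
        headers={"Authorization": f"Bearer {admin_token}"},
    )

# To execute: request.urlopen(build_logout_all_request(base, token))
```

Forcing a global logout after rotating secrets matters because, as noted above, stolen session tokens can otherwise keep authenticating long after passwords change.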
On April 9, Twitter/X began automatically modifying links that mention “twitter.com” to read “x.com” instead. But over the past 48 hours, dozens of new domain names have been registered that demonstrate how this change could be used to craft convincing phishing links — such as fedetwitter[.]com, which until very recently rendered as fedex.com in tweets.
The message displayed when one visits goodrtwitter.com, which Twitter/X displayed as goodrx.com in tweets and messages.
A search at DomainTools.com shows at least 60 domain names have been registered over the past two days for domains ending in “twitter.com,” although research so far shows the majority of these domains have been registered “defensively” by private individuals to prevent the domains from being purchased by scammers.
Those include carfatwitter.com, which Twitter/X truncated to carfax.com when the domain appeared in user messages or tweets. Visiting this domain currently displays a message that begins, “Are you serious, X Corp?”
Update: It appears Twitter/X has corrected its mistake, and no longer truncates any domain ending in “twitter.com” to “x.com.”
Original story:
The same message is on other newly registered domains, including goodrtwitter.com (goodrx.com), neobutwitter.com (neobux.com), roblotwitter.com (roblox.com), square-enitwitter.com (square-enix.com) and yandetwitter.com (yandex.com). The message left on these domains indicates they were defensively registered by a user on Mastodon whose bio says they are a systems admin/engineer. That profile has not responded to requests for comment.
A number of these new domains including “twitter.com” appear to be registered defensively by Twitter/X users in Japan. The domain netflitwitter.com (netflix.com, to Twitter/X users) now displays a message saying it was “acquired to prevent its use for malicious purposes,” along with a Twitter/X username.
The domain mentioned at the beginning of this story — fedetwitter.com — redirects users to the blog of a Japanese technology enthusiast. A user with the handle “amplest0e” appears to have registered space-twitter.com, which Twitter/X users would see as the CEO’s “space-x.com.” The domain “ametwitter.com” already redirects to the real americanexpress.com.
Some of the domains registered recently and ending in “twitter.com” currently do not resolve and contain no useful contact information in their registration records. Those include firefotwitter[.]com (firefox.com), ngintwitter[.]com (nginx.com), and webetwitter[.]com (webex.com).
The domain setwitter.com, which Twitter/X until very recently rendered as “sex.com,” redirects to this blog post warning about the recent changes and their potential use for phishing.
Sean McNee, vice president of research and data at DomainTools, told KrebsOnSecurity it appears Twitter/X did not properly limit its redirection efforts.
“Bad actors could register domains as a way to divert traffic from legitimate sites or brands given the opportunity — many such brands in the top million domains end in x, such as webex, hbomax, xerox, xbox, and more,” McNee said. “It is also notable that several other globally popular brands, such as Rolex and Linux, were also on the list of registered domains.”
The apparent oversight by Twitter/X was cause for amusement and amazement from many former users who have migrated to other social media platforms since the new CEO took over. Matthew Garrett, a lecturer at U.C. Berkeley’s School of Information, summed up the Schadenfreude thusly:
“Twitter just doing a ‘redirect links in tweets that go to x.com to twitter.com instead but accidentally do so for all domains that end x.com like eg spacex.com going to spacetwitter.com’ is not absolutely the funniest thing I could imagine but it’s high up there.”
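The failure mode can be reproduced with a naive substring replacement, which is presumably close to what Twitter/X shipped, alongside a safer rewrite that matches only the whole domain (or its subdomains):

```python
def rewrite_link(domain: str) -> str:
    # A naive substring replacement: rewrites ANY domain that merely
    # contains "twitter.com", including fedetwitter.com -> fedex.com.
    return domain.replace("twitter.com", "x.com")

def rewrite_link_safely(domain: str) -> str:
    # The fix: rewrite only the exact domain or its subdomains,
    # leaving lookalike registrations untouched.
    if domain == "twitter.com" or domain.endswith(".twitter.com"):
        return domain[: -len("twitter.com")] + "x.com"
    return domain
```

The difference is the classic string-matching pitfall: "ends with" on a dot boundary versus "contains anywhere," and every brand name ending in "x" fell into the gap.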
If only Patch Tuesdays came around infrequently — like total solar eclipse rare — instead of just creeping up on us each month like The Man in the Moon. Although to be fair, it would be tough for Microsoft to eclipse the number of vulnerabilities fixed in this month’s patch batch — a record 147 flaws in Windows and related software.
Yes, you read that right. Microsoft today released updates to address 147 security holes in Windows, Office, Azure, .NET Framework, Visual Studio, SQL Server, DNS Server, Windows Defender, Bitlocker, and Windows Secure Boot.
“This is the largest release from Microsoft this year and the largest since at least 2017,” said Dustin Childs, from Trend Micro’s Zero Day Initiative (ZDI). “As far as I can tell, it’s the largest Patch Tuesday release from Microsoft of all time.”
Tempering the sheer volume of this month’s patches is the middling severity of many of the bugs. Only three of April’s vulnerabilities earned Microsoft’s most-dire “critical” rating, meaning they can be abused by malware or malcontents to take remote control over unpatched systems with no help from users.
Most of the flaws that Microsoft deems “more likely to be exploited” this month are marked as “important,” which usually involve bugs that require a bit more user interaction (social engineering) but which nevertheless can result in system security bypass, compromise, and the theft of critical assets.
Ben McCarthy, lead cyber security engineer at Immersive Labs called attention to CVE-2024-20670, an Outlook for Windows spoofing vulnerability described as being easy to exploit. It involves convincing a user to click on a malicious link in an email, which can then steal the user’s password hash and authenticate as the user in another Microsoft service.
Another interesting bug McCarthy pointed to is CVE-2024-29063, which involves hard-coded credentials in Azure’s search backend infrastructure that could be gleaned by taking advantage of Azure AI search.
“This along with many other AI attacks in recent news shows a potential new attack surface that we are just learning how to mitigate against,” McCarthy said. “Microsoft has updated their backend and notified any customers who have been affected by the credential leakage.”
CVE-2024-29988 is a weakness that allows attackers to bypass Windows SmartScreen, a technology Microsoft designed to provide additional protections for end users against phishing and malware attacks. Childs said one of ZDI’s researchers found this vulnerability being exploited in the wild, although Microsoft doesn’t currently list CVE-2024-29988 as being exploited.
“I would treat this as in the wild until Microsoft clarifies,” Childs said. “The bug itself acts much like CVE-2024-21412 – a [zero-day threat from February] that bypassed the Mark of the Web feature and allows malware to execute on a target system. Threat actors are sending exploits in a zipped file to evade EDR/NDR detection and then using this bug (and others) to bypass Mark of the Web.”
Update, 7:46 p.m. ET: A previous version of this story said there were no zero-day vulnerabilities fixed this month. BleepingComputer reports that Microsoft has since confirmed that there are actually two zero-days. One is the flaw Childs just mentioned (CVE-2024-21412), and the other is CVE-2024-26234, described as a “proxy driver spoofing” weakness.
Satnam Narang at Tenable notes that this month’s release includes fixes for two dozen flaws in Windows Secure Boot, the majority of which are considered “Exploitation Less Likely” according to Microsoft.
“However, the last time Microsoft patched a flaw in Windows Secure Boot in May 2023 had a notable impact as it was exploited in the wild and linked to the BlackLotus UEFI bootkit, which was sold on dark web forums for $5,000,” Narang said. “BlackLotus can bypass functionality called secure boot, which is designed to block malware from being able to load when booting up. While none of these Secure Boot vulnerabilities addressed this month were exploited in the wild, they serve as a reminder that flaws in Secure Boot persist, and we could see more malicious activity related to Secure Boot in the future.”
For links to individual security advisories indexed by severity, check out ZDI’s blog and the Patch Tuesday post from the SANS Internet Storm Center. Please consider backing up your data or your drive before updating, and drop a note in the comments here if you experience any issues applying these fixes.
Adobe today released nine patches tackling at least two dozen vulnerabilities in a range of software products, including Adobe After Effects, Photoshop, Commerce, InDesign, Experience Manager, Media Encoder, Bridge, Illustrator, and Adobe Animate.
KrebsOnSecurity needs to correct the record on a point mentioned at the end of March’s “Fat Patch Tuesday” post, which looked at new AI capabilities built into Adobe Acrobat that are turned on by default. Adobe has since clarified that its apps won’t use AI to auto-scan your documents, as the original language in its FAQ suggested.
“In practice, no document scanning or analysis occurs unless a user actively engages with the AI features by agreeing to the terms, opening a document, and selecting the AI Assistant or generative summary buttons for that specific document,” Adobe said earlier this month.
A cybercrook who has been setting up websites that mimic the self-destructing message service privnote.com accidentally exposed the breadth of their operations recently when they threatened to sue a software company. The disclosure revealed a profitable network of phishing sites that behave and look like the real Privnote, except that any messages containing cryptocurrency addresses will be automatically altered to include a different payment address controlled by the scammers.
The real Privnote, at privnote.com.
Launched in 2008, privnote.com employs technology that encrypts each message so that even Privnote itself cannot read its contents. And it doesn’t send or receive messages. Creating a message merely generates a link. When that link is clicked or visited, the service warns that the message will be gone forever after it is read.
Privnote’s ease-of-use and popularity among cryptocurrency enthusiasts have made it a perennial target of phishers, who erect Privnote clones that function more or less as advertised but also quietly inject their own cryptocurrency payment addresses whenever a newly created note contains a crypto wallet address.
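Detecting this kind of tampering is straightforward in principle: extract any cryptocurrency addresses from the note as written and as received, and compare. A minimal sketch, covering only legacy Bitcoin address formats (a real checker would also handle bech32, Ethereum, and other address types):

```python
import re

# Loose pattern for legacy Bitcoin addresses: Base58 strings of 26-35
# characters starting with 1 or 3 (the charset excludes 0, O, I, and l).
BTC_ADDR_RE = re.compile(r"\b[13][a-km-zA-HJ-NP-Z1-9]{25,34}\b")

def addresses_tampered(sent: str, received: str) -> bool:
    """Return True if the crypto addresses in the received note differ
    from those the sender originally wrote."""
    return BTC_ADDR_RE.findall(sent) != BTC_ADDR_RE.findall(received)
```

The addresses in the check below are well-known public examples (the Bitcoin genesis address and a documented P2SH sample), not live wallets.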
Last month, a new user on GitHub named fory66399 lodged a complaint on the “issues” page for MetaMask, a software cryptocurrency wallet used to interact with the Ethereum blockchain. Fory66399 insisted that their website — privnote[.]co — was being wrongly flagged by MetaMask’s “eth-phishing-detect” list as malicious.
“We filed a lawsuit with a lawyer for dishonestly adding a site to the block list, damaging reputation, as well as ignoring the moderation department and ignoring answers!” fory66399 threatened. “Provide evidence or I will demand compensation!”
MetaMask’s lead product manager Taylor Monahan replied by posting several screenshots of privnote[.]co showing the site did indeed swap out any cryptocurrency addresses.
After being told where they could send a copy of their lawsuit, Fory66399 appeared to become flustered, and proceeded to mention a number of other interesting domain names:
You sent me screenshots from some other site! It’s red!!!!
The tornote.io website has a different color altogether
The privatenote,io website also has a different color! What’s wrong?????
A search at DomainTools.com for privatenote[.]io shows it has been registered to two names over as many years, including Andrey Sokol from Moscow and Alexandr Ermakov from Kiev. There is no indication these are the real names of the phishers, but the names are useful in pointing to other sites targeting Privnote since 2020.
DomainTools says other domains registered to Alexandr Ermakov include pirvnota[.]com, privatemessage[.]net, privatenote[.]io, and tornote[.]io.
A screenshot of the phishing domain privatemessage dot net.
The registration records for pirvnota[.]com at one point were updated from Andrey Sokol to “BPW” as the registrant organization, and “Tambov district” in the registrant state/province field. Searching DomainTools for domains that include both of these terms reveals pirwnote[.]com.
Other Privnote phishing domains that also phoned home to the same Internet address as pirwnote[.]com include privnode[.]com, privnate[.]com, and prevnóte[.]com. Pirwnote[.]com is currently selling security cameras made by the Chinese manufacturer Hikvision, via an Internet address based in Hong Kong.
It appears someone has gone to great lengths to make tornote[.]io seem like a legitimate website. For example, this account at Medium has authored more than a dozen blog posts in the past year singing the praises of Tornote as a secure, self-destructing messaging service. However, testing shows tornote[.]io will also replace any cryptocurrency addresses in messages with their own payment address.
These malicious note sites attract visitors by gaming search engine results to make the phishing domains appear prominently in search results for “privnote.” A search in Google for “privnote” currently returns tornote[.]io as the fifth result. Like other phishing sites tied to this network, Tornote will use the same cryptocurrency addresses for roughly 5 days, and then rotate in new payment addresses.
Tornote changed the cryptocurrency address entered into a test note to this address controlled by the phishers.
Throughout 2023, Tornote was hosted with the Russian provider DDoS-Guard, at the Internet address 186.2.163[.]216. A review of the passive DNS records tied to this address shows that apart from subdomains dedicated to tornote[.]io, the main other domain at this address was hkleaks[.]ml.
In August 2019, a slew of websites and social media channels dubbed “HKLEAKS” began doxing the identities and personal information of pro-democracy activists in Hong Kong. According to a report (PDF) from Citizen Lab, hkleaks[.]ml was the second domain that appeared as the perpetrators began to expand the list of those doxed.
HKleaks, as indexed by The Wayback Machine.
DomainTools shows there are more than 1,000 other domains whose registration records include the organization name “BPW” and “Tambov District” as the location. Virtually all of those domains were registered through one of two registrars — Hong Kong-based Nicenic and Singapore-based WebCC — and almost all appear to be phishing or pill-spam related.
Among those is rustraitor[.]info, a website erected after Russia invaded Ukraine in early 2022 that doxed Russians perceived to have helped the Ukrainian cause.
An archive.org copy of Rustraitor.
In keeping with the overall theme, these phishing domains appear focused on stealing usernames and passwords to some of the cybercrime underground’s busiest shops, including Brian’s Club. What do all the phished sites have in common? They all accept payment via virtual currencies.
It appears MetaMask’s Monahan made the correct decision in forcing these phishers to tip their hand: Among the websites at that DDoS-Guard address are multiple MetaMask phishing domains, including metarrnask[.]com, meternask[.]com, and rnetamask[.]com.
How profitable are these private note phishing sites? Reviewing the four malicious cryptocurrency payment addresses that the attackers swapped into notes passed through privnote[.]co (as pictured in Monahan’s screenshot above) shows that between March 15 and March 19, 2024, those addresses raked in and transferred out nearly $18,000 in cryptocurrencies. And that’s just one of their phishing websites.
Roughly nine years ago, KrebsOnSecurity profiled a Pakistan-based cybercrime group called “The Manipulaters,” a sprawling web hosting network of phishing and spam delivery platforms. In January 2024, The Manipulaters pleaded with this author to unpublish previous stories about their work, claiming the group had turned over a new leaf and gone legitimate. But new research suggests that while they have improved the quality of their products and services, these nitwits still fail spectacularly at hiding their illegal activities.
In May 2015, KrebsOnSecurity published a brief writeup about the brazen Manipulaters team, noting that they openly operated hundreds of web sites selling tools designed to trick people into giving up usernames and passwords, or deploying malicious software on their PCs.
Manipulaters advertisement for “Office 365 Private Page with Antibot” phishing kit sold on the domain heartsender[.]com. “Antibot” refers to functionality that attempts to evade automated detection techniques, keeping a phish deployed as long as possible. Image: DomainTools.
The core brand of The Manipulaters has long been a shared cybercriminal identity named “Saim Raza,” who for the past decade has peddled a popular spamming and phishing service variously called “Fudtools,” “Fudpage,” “Fudsender,” “FudCo,” etc. The term “FUD” in those names stands for “Fully Un-Detectable,” and it refers to cybercrime resources that will evade detection by security tools like antivirus software or anti-spam appliances.
A September 2021 story here checked in on The Manipulaters, and found that Saim Raza and company were prospering under their FudCo brands, which they secretly managed from a front company called We Code Solutions.
That piece worked backwards from all of the known Saim Raza email addresses to identify Facebook profiles for multiple We Code Solutions employees, many of whom could be seen celebrating company anniversaries gathered around a giant cake with the words “FudCo” painted in icing.
Since that story ran, KrebsOnSecurity has heard from this Saim Raza identity on two occasions. The first was in the weeks following the Sept. 2021 piece, when one of Saim Raza’s known email addresses — bluebtcus@gmail.com — pleaded to have the story taken down.
“Hello, we already leave that fud etc before year,” the Saim Raza identity wrote. “Why you post us? Why you destroy our lifes? We never harm anyone. Please remove it.”
Not wishing to be manipulated by a phishing gang, KrebsOnSecurity ignored those entreaties. But on Jan. 14, 2024, KrebsOnSecurity heard from the same bluebtcus@gmail.com address, apropos of nothing.
“Please remove this article,” Saim Raza wrote, linking to the 2021 profile. “Please already my police register case on me. I already leave everything.”
Asked to elaborate on the police investigation, Saim Raza said they were freshly released from jail.
“I was there many days,” the reply explained. “Now back after bail. Now I want to start my new work.”
Exactly what that “new work” might entail, Saim Raza wouldn’t say. But a new report from researchers at DomainTools.com finds that several computers associated with The Manipulaters have been massively hacked by malicious data- and password-snarfing malware for quite some time.
DomainTools says the malware infections on Manipulaters PCs exposed “vast swaths of account-related data along with an outline of the group’s membership, operations, and position in the broader underground economy.”
“Curiously, the large subset of identified Manipulaters customers appear to be compromised by the same stealer malware,” DomainTools wrote. “All observed customer malware infections began after the initial compromise of Manipulaters PCs, which raises a number of questions regarding the origin of those infections.”
A number of questions, indeed. The core Manipulaters product these days is a spam delivery service called HeartSender, whose homepage openly advertises phishing kits targeting users of various Internet companies, including Microsoft 365, Yahoo, AOL, Intuit, iCloud and ID.me, to name a few.
A screenshot of the homepage of HeartSender 4 displays an IP address tied to fudtoolshop@gmail.com. Image: DomainTools.
HeartSender customers can interact with the subscription service via the website, but the product appears to be far more effective and user-friendly if one downloads HeartSender as a Windows executable program. Whether that HeartSender program was somehow compromised and used to infect the service’s customers is unknown.
However, DomainTools also found that the hosted version of the HeartSender service leaks an extraordinary amount of user information that probably is not intended to be publicly accessible. Apparently, the HeartSender web interface has several webpages that are accessible to unauthenticated users, exposing customer credentials along with support requests to HeartSender developers.
“Ironically, the Manipulaters may create more short-term risk to their own customers than law enforcement,” DomainTools wrote. “The data table “User Feedbacks” (sic) exposes what appear to be customer authentication tokens, user identifiers, and even a customer support request that exposes root-level SMTP credentials–all visible by an unauthenticated user on a Manipulaters-controlled domain. Given the risk for abuse, this domain will not be published.”
This is hardly the first time The Manipulaters have shot themselves in the foot. In 2019, The Manipulaters failed to renew their core domain name — manipulaters[.]com — the same one tied to so many of the company’s past and current business operations. That domain was quickly scooped up by Scylla Intel, a cyber intelligence firm that focuses on connecting cybercriminals to their real-life identities.
Currently, The Manipulaters seem focused on building out and supporting HeartSender, which specializes in spam and email-to-SMS spamming services.
“The Manipulaters’ newfound interest in email-to-SMS spam could be in response to the massive increase in smishing activity impersonating the USPS,” DomainTools wrote. “Proofs posted on HeartSender’s Telegram channel contain numerous references to postal service impersonation, including proving delivery of USPS-themed phishing lures and the sale of a USPS phishing kit.”
Reached via email, the Saim Raza identity declined to respond to questions about the DomainTools findings.
“First [of] all we never work on virus or compromised computer etc,” Raza replied. “If you want to write like that fake go ahead. Second I leave country already. If someone bind anything with exe file and spread on internet its not my fault.”
Asked why they left Pakistan, Saim Raza said the authorities there just wanted to shake them down.
“After your article our police put FIR on my [identity],” Saim Raza explained. “FIR” in this case stands for “First Information Report,” which is the initial complaint in the criminal justice system of Pakistan.
“They only get money from me nothing else,” Saim Raza continued. “Now some officers ask for money again again. Brother, there is no good law in Pakistan just they need money.”
Saim Raza has a history of being slippery with the truth, so who knows whether The Manipulaters and/or its leaders have in fact fled Pakistan (it may be more of an extended vacation abroad). With any luck, these guys will soon venture into a more Western-friendly, “good law” nation and receive a warm welcome by the local authorities.
Thread hijacking attacks. They happen when someone you know has their email account compromised, and you are suddenly dropped into an existing conversation between the sender and someone else. These missives draw on the recipient’s natural curiosity about being copied on a private discussion, which is modified to include a malicious link or attachment. Here’s the story of a thread hijacking attack in which a journalist was copied on a phishing email from the unwilling subject of a recent scoop.
In Sept. 2023, the Pennsylvania news outlet LancasterOnline.com published a story about Adam Kidan, a wealthy businessman with a criminal past who is a major donor to Republican causes and candidates, including Rep. Lloyd Smucker (R-Pa).
The LancasterOnline story about Adam Kidan.
Several months after that piece ran, the story’s author Brett Sholtis received two emails from Kidan, both of which contained attachments. One of the messages appeared to be a lengthy conversation between Kidan and a colleague, with the subject line, “Re: Successfully sent data.” The second missive was a more brief email from Kidan with the subject, “Acknowledge New Work Order,” and a message that read simply, “Please find the attached.”
Sholtis said he clicked the attachment in one of the messages, which then launched a web page that looked exactly like a Microsoft Office 365 login page. An analysis of the webpage reveals it would check any submitted credentials at the real Microsoft website, and return an error if the user entered bogus account information. A successful login would record the submitted credentials and forward the victim to the real Microsoft website.
But Sholtis said he didn’t enter his Outlook username and password. Instead, he forwarded the messages to LancasterOnline’s IT team, which quickly flagged them as phishing attempts.
LancasterOnline Executive Editor Tom Murse said the two phishing messages from Mr. Kidan raised eyebrows in the newsroom because Kidan had threatened to sue the news outlet multiple times over Sholtis’s story.
“We were just perplexed,” Murse said. “It seemed to be a phishing attempt but we were confused why it would come from a prominent businessman we’ve written about. Our initial response was confusion, but we didn’t know what else to do with it other than to send it to the FBI.”
The phishing lure attached to the thread hijacking email from Mr. Kidan.
In 2006, Kidan was sentenced to 70 months in federal prison after pleading guilty to defrauding lenders along with Jack Abramoff, the disgraced lobbyist whose corruption became a symbol of the excesses of Washington influence peddling. He was paroled in 2009, and in 2014 moved his family to a home in Lancaster County, Pa.
The FBI hasn’t responded to LancasterOnline’s tip. Messages sent by KrebsOnSecurity to Kidan’s email addresses were returned as blocked. Messages left with Mr. Kidan’s company, Empire Workforce Solutions, went unreturned.
No doubt the FBI saw the messages from Kidan for what they likely were: The result of Mr. Kidan having his Microsoft Outlook account compromised and used to send malicious email to people in his contacts list.
Thread hijacking attacks are hardly new, but they remain effective mainly because many Internet users still don’t know how to identify them. The email security firm Proofpoint says it has tracked north of 90 million malicious messages in the last five years that leverage this attack method.
One key reason thread hijacking is so successful is that these attacks generally do not include the tell that exposes most phishing scams: A fabricated sense of urgency. A majority of phishing threats warn of negative consequences should you fail to act quickly — such as an account suspension or an unauthorized high-dollar charge going through.
In contrast, thread hijacking campaigns tend to patiently prey on the natural curiosity of the recipient.
Ryan Kalember, chief strategy officer at Proofpoint, said probably the most ubiquitous examples of thread hijacking are “CEO fraud” or “business email compromise” scams, wherein employees are tricked by an email from a senior executive into wiring millions of dollars to fraudsters overseas.
But Kalember said these low-tech attacks can nevertheless be quite effective because they tend to catch people off-guard.
“It works because you feel like you’re suddenly included in an important conversation,” Kalember said. “It just registers a lot differently when people start reading, because you think you’re observing a private conversation between two different people.”
Some thread hijacking attacks actually involve multiple threat actors who are actively conversing while copying — but not addressing — the recipient.
“We call these multi-persona phishing scams, and they’re often paired with thread hijacking,” Kalember said. “It’s basically a way to build a little more affinity than just copying people on an email. And the longer the conversation goes on, the higher their success rate seems to be because some people start replying to the thread [and participating] psycho-socially.”
The best advice to sidestep phishing scams is to avoid clicking on links or attachments that arrive unbidden in emails, text messages and other mediums. If you’re unsure whether the message is legitimate, take a deep breath and visit the site or service in question manually — ideally, using a browser bookmark so as to avoid potential typosquatting sites.
Several Apple customers recently reported being targeted in elaborate phishing attacks that involve what appears to be a bug in Apple’s password reset feature. In this scenario, a target’s Apple devices are forced to display dozens of system-level prompts that prevent the devices from being used until the recipient responds “Allow” or “Don’t Allow” to each prompt. Assuming the user manages not to fat-finger the wrong button on the umpteenth password reset request, the scammers will then call the victim while spoofing Apple support in the caller ID, saying the user’s account is under attack and that Apple support needs to “verify” a one-time code.
Some of the many notifications Patel says he received from Apple all at once.
Parth Patel is an entrepreneur who is trying to build a startup in the conversational AI space. On March 23, Patel documented on Twitter/X a recent phishing campaign targeting him that involved what’s known as a “push bombing” or “MFA fatigue” attack, wherein the phishers abuse a feature or weakness of a multi-factor authentication (MFA) system in a way that inundates the target’s device(s) with alerts to approve a password change or login.
“All of my devices started blowing up, my watch, laptop and phone,” Patel told KrebsOnSecurity. “It was like this system notification from Apple to approve [a reset of the account password], but I couldn’t do anything else with my phone. I had to go through and decline like 100-plus notifications.”
Some people confronted with such a deluge may eventually click “Allow” to the incessant password reset prompts — just so they can use their phone again. Others may inadvertently approve one of these prompts, which will also appear on a user’s Apple watch if they have one.
But the attackers in this campaign had an ace up their sleeves: Patel said after denying all of the password reset prompts from Apple, he received a call on his iPhone that said it was from Apple Support (the number displayed was 1-800-275-2273, Apple’s real customer support line).
“I pick up the phone and I’m super suspicious,” Patel recalled. “So I ask them if they can verify some information about me, and after hearing some aggressive typing on his end he gives me all this information about me and it’s totally accurate.”
All of it, that is, except his real name. Patel said when he asked the fake Apple support rep to validate the name they had on file for the Apple account, the caller gave a name that was not his but rather one that Patel has only seen in background reports about him that are for sale at a people-search website called PeopleDataLabs.
Patel said he has worked fairly hard to remove his information from multiple people-search websites, and he found PeopleDataLabs uniquely and consistently listed this inaccurate name as an alias on his consumer profile.
“For some reason, PeopleDataLabs has three profiles that come up when you search for my info, and two of them are mine but one is an elementary school teacher from the midwest,” Patel said. “I asked them to verify my name and they said Anthony.”
Patel said the goal of the voice phishers is to trigger an Apple ID reset code to be sent to the user’s device, which is a text message that includes a one-time password. If the user supplies that one-time code, the attackers can then reset the password on the account and lock the user out. They can also then remotely wipe all of the user’s Apple devices.
Chris is a cryptocurrency hedge fund owner who asked that only his first name be used so as not to paint a bigger target on himself. Chris told KrebsOnSecurity he experienced a remarkably similar phishing attempt in late February.
“The first alert I got I hit ‘Don’t Allow’, but then right after that I got like 30 more notifications in a row,” Chris said. “I figured maybe I sat on my phone weird, or was accidentally pushing some button that was causing these, and so I just denied them all.”
Chris says the attackers persisted hitting his devices with the reset notifications for several days after that, and at one point he received a call on his iPhone that said it was from Apple support.
“I said I would call them back and hung up,” Chris said, demonstrating the proper response to such unbidden solicitations. “When I called back to the real Apple, they couldn’t say whether anyone had been in a support call with me just then. They just said Apple states very clearly that it will never initiate outbound calls to customers — unless the customer requests to be contacted.”
Massively freaking out that someone was trying to hijack his digital life, Chris said he changed his passwords and then went to an Apple store and bought a new iPhone. From there, he created a new Apple iCloud account using a brand new email address.
Chris said he then proceeded to get even more system alerts on his new iPhone and iCloud account — all the while still sitting at the local Apple Genius Bar.
Chris told KrebsOnSecurity his Genius Bar tech was mystified about the source of the alerts, but Chris said he suspects that whatever the phishers are abusing to rapidly generate these Apple system alerts requires knowing the phone number on file for the target’s Apple account. After all, that was the only aspect of Chris’s new iPhone and iCloud account that hadn’t changed.
“Ken” is a security industry veteran who spoke on condition of anonymity. Ken said he first began receiving these unsolicited system alerts on his Apple devices earlier this year, but that he has not received any phony Apple support calls as others have reported.
“This recently happened to me in the middle of the night at 12:30 a.m.,” Ken said. “And even though I have my Apple watch set to remain quiet during the time I’m usually sleeping at night, it woke me up with one of these alerts. Thank god I didn’t press ‘Allow,’ which was the first option shown on my watch. I had to scroll the watch wheel to see and press the ‘Don’t Allow’ button.”
Ken shared this photo he took of an alert on his watch that woke him up at 12:30 a.m. Ken said he had to scroll on the watch face to see the “Don’t Allow” button.
Ken didn’t know it when all this was happening (and it’s not at all obvious from the Apple prompts), but clicking “Allow” would not have allowed the attackers to change Ken’s password. Rather, clicking “Allow” displays a six-digit PIN that must be entered on Ken’s device — allowing Ken to change his password. It appears that these rapid password reset prompts are being used to make a subsequent inbound phone call spoofing Apple more believable.
Ken said he contacted the real Apple support and was eventually escalated to a senior Apple engineer. The engineer assured Ken that turning on an Apple Recovery Key for his account would stop the notifications once and for all.
A recovery key is an optional security feature that Apple says “helps improve the security of your Apple ID account.” It is a randomly generated 28-character code, and when you enable a recovery key it is supposed to disable Apple’s standard account recovery process. The thing is, enabling it is not a simple process, and if you ever lose that code in addition to all of your Apple devices you will be permanently locked out.
Ken said he enabled a recovery key for his account as instructed, but that it hasn’t stopped the unbidden system alerts from appearing on all of his devices every few days.
KrebsOnSecurity tested Ken’s experience, and can confirm that enabling a recovery key does nothing to stop a password reset prompt from being sent to associated Apple devices. Visiting Apple’s “forgot password” page — https://iforgot.apple.com — asks for an email address and for the visitor to solve a CAPTCHA.
After that, the page will display the last two digits of the phone number tied to the Apple account. Filling in the missing digits and hitting submit on that form will send a system alert, whether or not the user has enabled an Apple Recovery Key.
The password reset page at iforgot.apple.com.
What sanely designed authentication system would send dozens of requests for a password change in the span of a few moments, when the first requests haven’t even been acted on by the user? Could this be the result of a bug in Apple’s systems?
Apple has not yet responded to requests for comment.
Throughout 2022, a criminal hacking group known as LAPSUS$ used MFA bombing to great effect in intrusions at Cisco, Microsoft and Uber. In response, Microsoft began enforcing “MFA number matching,” a feature that displays a series of numbers to a user attempting to log in with their credentials. These numbers must then be entered into the account owner’s Microsoft authenticator app on their mobile device to verify they are logging into the account.
Kishan Bagaria is a hobbyist security researcher and engineer who founded the website texts.com (now owned by Automattic), and he’s convinced Apple has a problem on its end. In August 2019, Bagaria reported to Apple a bug that allowed an exploit he dubbed “AirDoS” because it could be used to let an attacker infinitely spam all nearby iOS devices with a system-level prompt to share a file via AirDrop — a file-sharing capability built into Apple products.
Apple fixed that bug nearly four months later in December 2019, thanking Bagaria in the associated security bulletin. Bagaria said Apple’s fix was to add stricter rate limiting on AirDrop requests, and he suspects that someone has figured out a way to bypass Apple’s rate limit on how many of these password reset requests can be sent in a given timeframe.
“I think this could be a legit Apple rate limit bug that should be reported,” Bagaria said.
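The fix Bagaria describes — server-side rate limiting — is commonly implemented with a token bucket: each account gets a small budget of prompts that refills slowly, and requests beyond the budget are dropped before they ever reach the user’s devices. The sketch below is purely illustrative of that general idea (it is not Apple’s actual implementation, and the limits chosen are hypothetical):

```python
import time

# Illustrative token-bucket rate limiter: each account gets a small
# budget of password-reset prompts that refills slowly over time.
class TokenBucket:
    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)      # start with a full budget
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over budget: drop, don't forward to the device

# Hypothetical policy: at most 3 prompts, refilling one every 10 minutes.
bucket = TokenBucket(capacity=3, refill_per_sec=1 / 600)
results = [bucket.allow() for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

With a limit like this in place, a flood of dozens of reset requests in a few moments would be rejected server-side — which is why Bagaria suspects the attackers have found a way around whatever limit Apple applies.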
Apple seems to require a phone number to be on file for your account, but after you’ve set up the account it doesn’t have to be a mobile phone number. KrebsOnSecurity’s testing shows Apple will accept a VOIP number (like Google Voice). So, changing your account phone number to a VOIP number that isn’t widely known would be one mitigation here.
One caveat with the VOIP number idea: Unless you include a real mobile number, Apple’s iMessage and Facetime applications will be disabled for that device. This might be a bonus for those concerned about reducing the overall attack surface of their Apple devices, since zero-click zero-days in these applications have repeatedly been used by spyware purveyors.
Also, it appears Apple’s password reset system will accept and respect email aliases. Adding a “+” character after the username portion of your email address — followed by a notation specific to the site you’re signing up at — lets you create an infinite number of unique email addresses tied to the same account.
For instance, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to my inbox and create a corresponding folder called “Example,” along with a new filter that sends any email addressed to that alias to the Example folder. In this case, however, perhaps a less obvious alias than “+apple” would be advisable.
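The aliasing rule above is mechanical — everything between the “+” and the “@” is ignored for delivery, so each site can get its own unique address. A minimal sketch (the addresses shown are just examples):

```python
# Gmail-style "plus addressing": mail to local+tag@domain is delivered
# to local@domain, so the tag can encode which site the alias was for.
def plus_alias(address: str, tag: str) -> str:
    """Insert a +tag before the @ of an email address."""
    local, _, domain = address.partition("@")
    return f"{local}+{tag}@{domain}"

print(plus_alias("krebsonsecurity@gmail.com", "example"))
# krebsonsecurity+example@gmail.com
```

A mail filter matching the full alias can then route each site’s messages to its own folder, and a leaked alias immediately reveals which service exposed it.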
Update, March 27, 5:06 p.m. ET: Added perspective on Ken’s experience. Also included a What Can You Do? section.
The nonprofit organization that supports the Firefox web browser said today it is winding down its new partnership with Onerep, an identity protection service recently bundled with Firefox that offers to remove users from hundreds of people-search sites. The move comes just days after a report by KrebsOnSecurity forced Onerep’s CEO to admit that he has founded dozens of people-search networks over the years.
Mozilla only began bundling Onerep in Firefox last month, when it announced the reputation service would be offered on a subscription basis as part of Mozilla Monitor Plus. Launched in 2018 under the name Firefox Monitor, Mozilla Monitor also checks data from the website Have I Been Pwned? to let users know when their email addresses or passwords are leaked in data breaches.
On March 14, KrebsOnSecurity published a story showing that Onerep’s Belarusian CEO and founder Dimitri Shelest has launched dozens of people-search services since 2010, including a still-active data broker called Nuwber that sells background reports on people. Onerep and Shelest did not respond to requests for comment on that story.
But on March 21, Shelest released a lengthy statement wherein he admitted to maintaining an ownership stake in Nuwber, a consumer data broker he founded in 2015 — around the same time he launched Onerep.
Shelest maintained that Nuwber has “zero cross-over or information-sharing with Onerep,” and said any other old domains that may be found and associated with his name are no longer being operated by him.
“I get it,” Shelest wrote. “My affiliation with a people search business may look odd from the outside. In truth, if I hadn’t taken that initial path with a deep dive into how people search sites work, Onerep wouldn’t have the best tech and team in the space. Still, I now appreciate that we did not make this more clear in the past and I’m aiming to do better in the future.” The full statement is available here (PDF).
Onerep CEO and founder Dimitri Shelest.
In a statement released today, a spokesperson for Mozilla said it was moving away from Onerep as a service provider in its Monitor Plus product.
“Though customer data was never at risk, the outside financial interests and activities of Onerep’s CEO do not align with our values,” Mozilla wrote. “We’re working now to solidify a transition plan that will provide customers with a seamless experience and will continue to put their interests first.”
KrebsOnSecurity also reported that Shelest’s email address was used circa 2010 by an affiliate of Spamit, a Russian-language organization that paid people to aggressively promote websites hawking male enhancement drugs and generic pharmaceuticals. As noted in the March 14 story, this connection was confirmed by research from multiple graduate students at my alma mater George Mason University.
Shelest denied ever being associated with Spamit. “Between 2010 and 2014, we put up some web pages and optimize them — a widely used SEO practice — and then ran AdSense banners on them,” Shelest said, presumably referring to the dozens of people-search domains KrebsOnSecurity found were connected to his email addresses (dmitrcox@gmail.com and dmitrcox2@gmail.com). “As we progressed and learned more, we saw that a lot of the inquiries coming in were for people.”
Shelest also acknowledged that Onerep pays to run ads on a handful of data broker sites in very specific circumstances.
“Our ad is served once someone has manually completed an opt-out form on their own,” Shelest wrote. “The goal is to let them know that if they were exposed on that site, there may be others, and bring awareness to there being a more automated opt-out option, such as Onerep.”
Reached via Twitter/X, HaveIBeenPwned founder Troy Hunt said he knew Mozilla was considering a partnership with Onerep, but that he was previously unaware of the Onerep CEO’s many conflicts of interest.
“I knew Mozilla had this in the works and we’d casually discussed it when talking about Firefox monitor,” Hunt told KrebsOnSecurity. “The point I made to them was the same as I’ve made to various companies wanting to put data broker removal ads on HIBP: removing your data from legally operating services has minimal impact, and you can’t remove it from the outright illegal ones who are doing the genuine damage.”
Playing both sides — creating and spreading the same digital disease that your medicine is designed to treat — may be highly unethical and wrong. But in the United States it’s not against the law. Nor is collecting and selling data on Americans. Privacy experts say the problem is that data brokers, people-search services like Nuwber and Onerep, and online reputation management firms exist because virtually all U.S. states exempt so-called “public” or “government” records from consumer privacy laws.
Those include voting registries, property filings, marriage certificates, motor vehicle records, criminal records, court documents, death records, professional licenses, and bankruptcy filings. Data brokers also can enrich consumer records with additional information, by adding social media data and known associates.
The March 14 story on Onerep was the second in a series of three investigative reports published here this month that examined the data broker and people-search industries, and highlighted the need for more congressional oversight — if not regulation — on consumer data protection and privacy.
On March 8, KrebsOnSecurity published A Close Up Look at the Consumer Data Broker Radaris, which showed that the co-founders of Radaris operate multiple Russian-language dating services and affiliate programs. It also appears many of their businesses have ties to a California marketing firm that works with a Russian state-run media conglomerate currently sanctioned by the U.S. government.
On March 20, KrebsOnSecurity published The Not-So-True People-Search Network from China, which revealed an elaborate web of phony people-search companies and executives designed to conceal the location of people-search affiliates in China who are earning money promoting U.S.-based data brokers that sell personal information on Americans.
DISCLAIMER:
February's crippling ransomware attack against Change Healthcare, which saw prescription orders delayed across the United States, continues to have serious consequences. Read more in my article on the Hot for Security blog.
The international hotel chain Omni Hotels & Resorts has confirmed that a cyber attack last month saw it shut down its systems, with hackers stealing personal information about its customers. Read more in my article on the Exponential-E blog.
Police have successfully infiltrated and disrupted the fraud platform "LabHost", used by more than 2,000 criminals to defraud victims worldwide. Read more in my article on the Tripwire State of Security blog.
Take That's Gary Barlow chats up a pizza-slinging granny from Essex via Facebook, or does he? And a scam takes a sinister turn - for both the person being scammed and an innocent participant - in Ohio. All this and more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault.
Law enforcement officers in Zambia have arrested 77 people at a call centre company they allege had employed local school-leavers to scam internet users around the world. Read more in my article on the Hot for Security blog.
The East Central University (ECU) of Ada, Oklahoma, has revealed that a ransomware gang launched an attack against its systems that left some computers and servers encrypted and may have also seen sensitive information stolen. Read more in my article on the Hot for Security blog.
Learn more about the DragonForce ransomware - how it came to prominence, and some of the unusual tactics used by the hackers who extort money from companies with it. Read more in my article on the Tripwire State of Security blog.
If 25 documents stolen is "very serious," I'm not sure the words exist to describe the 1.3 terabytes of data that Leicester City Council now says it has had stolen by hackers.
MPs aren't just getting excited about an upcoming election, but also the fruity WhatsApp messages they're receiving, can we trust AI with our health, and who on earth is pretending to be a producer for the Drew Barrymore TV show? All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by John Hawes.
Targus, the well-known laptop bag and case manufacturer, has been hit by a cyber attack that has interrupted its normal business operations. Read more in my article on the Hot for Security blog.
Two China-based Android app developers are being sued by Google for an alleged scam targeting 100,000 users worldwide through fake cryptocurrency and other investment apps. Read more in my article on the Hot for Security blog.
Google has issued a security advisory to owners of its Android Pixel smartphones, warning that it has discovered someone has been targeting some devices to bypass their built-in security. Read more in my article on the Tripwire State of Security blog.
New research has found that ransomware remediation costs can explode when backups have been compromised by malicious hackers - with overall recovery costs eight times higher than for those whose backups are not impacted. Read more in my article on the Exponential-E blog.
Google says it is deleting the Google Chrome Incognito private-browsing data that it should never have collected anyway. Can a zero-risk millionaire-making bot be trusted? And what countries are banned from buying your sensitive data? All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by Host Unknown's Thom Langford.
Amazon failed to deliver an iPhone 15 to my home, but claims I am not eligible for a refund. Is there anybody at Amazon who still cares about looking after their legitimate honest customers?
The UK's Office for Nuclear Regulation (ONR) has started legal action against the controversial Sellafield nuclear waste facility due to years of alleged cybersecurity breaches. Read more in my article on the Hot for Security blog.
Security researchers find a way to unlock millions of hotel rooms, the UK introduces cyberflashing laws, and Google's AI search pushes malware and scams. All this and much much more is discussed in the latest edition of the "Smashing Security" podcast by cybersecurity veterans Graham Cluley and Carole Theriault, joined this week by T-Minus's Maria Varmazis.
The Qilin ransomware group has targeted The Big Issue, a street newspaper sold by the homeless and vulnerable. A post on Qilin's dark web leak site claimed the gang has stolen 550 GB of confidential data from the periodical's parent company. Read more in my article on the Hot for Security blog.
Hardware wallet manufacturer Trezor has explained how its Twitter account was compromised - despite it having sensible security precautions in place, such as strong passwords and multi-factor authentication. Read more in my article on the Hot for Security blog.
Nemesis Market, a notorious corner of the darknet beloved by cybercriminals and drug dealers, has been suddenly shut down after German police seized control of its systems. Read more in my article on the Tripwire State of Security blog.
DISCLAIMER:
A new info-stealing malware linked to Redline poses as a game cheat called 'Cheat Lab,' promising downloaders a free copy if they convince their friends to install it too. [...]
American telecom provider Frontier Communications is restoring systems after a cybercrime group breached some of its IT systems in a recent cyberattack. [...]
The Hospital Simone Veil in Cannes (CHC-SV) has announced that it was targeted by a cyberattack on Tuesday morning, severely impacting its operations and forcing staff to go back to pen and paper. [...]
According to a joint advisory from the FBI, CISA, Europol's European Cybercrime Centre (EC3), and the Netherlands' National Cyber Security Centre (NCSC-NL), the Akira ransomware operation has breached the networks of over 250 organizations and raked in roughly $42 million in ransom payments. [...]
A legitimate-looking Google Search advertisement for the crypto trading platform 'Whales Market' redirects visitors to a wallet-draining phishing site that steals all of your assets. [...]
A preview of Microsoft Office LTSC 2024, a volume-licensed and perpetual version of Office for commercial customers, is now available for Windows and macOS users. [...]
LastPass is warning of a malicious campaign targeting its users with the CryptoChameleon phishing kit that is associated with cryptocurrency theft. [...]
The LabHost phishing-as-a-service (PhaaS) platform has been disrupted in a year-long global law enforcement operation that compromised the infrastructure and arrested 37 suspects, among them the original developer. [...]
A new Android banking malware named 'SoumniBot' is using a less common obfuscation approach by exploiting weaknesses in the Android manifest extraction and parsing procedure. [...]
In an ongoing Kubernetes cryptomining campaign, attackers target OpenMetadata workloads using critical remote code execution and authentication vulnerabilities. [...]
The financially motivated threat actor FIN7 targeted a large U.S. car maker with spear-phishing emails for employees in the IT department to infect systems with the Anunak backdoor. [...]
The U.S. Justice Department charged Moldovan national Alexander Lefterov, the owner and operator of a large-scale botnet that infected thousands of computers across the United States. [...]
DISCLAIMER:
Cerebral, Inc. has agreed to a stipulated order with the Federal Trade Commission (FTC) to resolve allegations of deceptive practices and improper handling of sensitive consumer information. Cerebral, Inc., an online healthcare service provider, faced charges from the FTC alleging misuse of consumer information and failure to adequately disclose the terms of its service charges, …
The post Cerebral to Pay $7 Million for Using Trackers on its Site appeared first on RestorePrivacy.
DuckDuckGo has introduced Privacy Pro, a premium subscription service that combines a VPN service, a personal information removal tool, and an identity theft restoration service into a single consolidated package that costs $9.99/month or $99.99/year. DuckDuckGo, known for its uncompromising stance on protecting user data and safeguarding privacy, previously marketed free privacy-focused services, including its …
The post DuckDuckGo Launches VPN Product Bundled With ID Protections appeared first on RestorePrivacy.
Google is rolling out a massive update for its Find My Device app, which adds Bluetooth-based crowdsourced tracking for Android devices and tags from multiple manufacturers. Find My Device is an app developed by Google that allows Android users to locate their lost or stolen devices and even wipe all data remotely. Previously, it primarily …
The post Google Adds Privacy-Minded Crowdsourced Tracking on ‘Find My Device’ appeared first on RestorePrivacy.
A malicious advertising campaign targets users searching for NordVPN on Microsoft Bing, infecting them with the SecTopRAT malware. Microsoft Bing is a search engine that has experienced massive growth compared to past years, partly thanks to the rise of the Edge browser, which uses it by default, and also its recently acquired AI capabilities that …
The post Trojanized NordVPN Installer Pushed via Microsoft Bing Campaign appeared first on RestorePrivacy.
Google has agreed to wipe billions of records it collected from over 136 million Americans, users of its Chrome browser, as part of a settlement for a 2020 lawsuit. The plaintiffs allege that Google falsely communicated that users of Chrome in Incognito mode are protected from data collection without consent, persistent tracking, and browsing activity …
The post Google Agrees to Delete Billions of Files Collected in Chrome Incognito appeared first on RestorePrivacy.
Nearly three years after RestorePrivacy first broke the AT&T breach by the prolific hacking group ShinyHunters, AT&T has finally admitted today that there was a breach. AT&T has determined that the data a threat actor published on a hacker forum two weeks ago is theirs, impacting 73 million current and former customers. AT&T is a …
The post AT&T Finally Admits Data Leak Impacting 73 Million Customers appeared first on RestorePrivacy.
Threat actor ‘IntelBroker’ has claimed yet another high-profile data breach, this time against Mashvisor, claiming to hold multiple user and agent databases exposing several hundred thousand sensitive entries. Mashvisor is a real estate data analytics company that provides various tools and services to help investors analyze and find profitable traditional and Airbnb rental …
The post Hacker Claims Breach on Real Estate Data Analytics Firm Mashvisor appeared first on RestorePrivacy.
Atlas VPN has announced that it will be shutting down its services on April 24, 2024, due to rising costs and the challenges that arise from the highly competitive VPN market. Atlas VPN was a relatively new player in the field that attempted to capture the “budget” VPN audience, offering good services at a low …
The post Atlas VPN Announces Shutdown, All Users to Be Moved to NordVPN appeared first on RestorePrivacy.
Several free Android VPN apps have been found to support a malicious residential proxy operation named ‘Proxylib.’ Proxylib infects Android devices with an agent that conceals malicious activities such as ad fraud, bot usage, or more dangerous operations like malware distribution and phishing campaigns. The agent routes user traffic through the infected Android devices, making …
The post Free VPN Apps on Google Play Turn Phones into Proxies appeared first on RestorePrivacy.
Internet connections using the OpenVPN protocol can be easily identified by using DPI (Deep Packet Inspection) technologies and blocked with minor collateral damage. This result was presented in a technical paper published earlier this month by a team of researchers in the United States. The team performed a large-scale study involving a million users, demonstrating …
The post Study Shows OpenVPN Traffic Can Be Easily Identified and Blocked appeared first on RestorePrivacy.
DISCLAIMER:
Is your email address compromised? Check it on this page.
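Have I Been Pwned also exposes a password-checking API built on k-anonymity: only the first five characters of the password's SHA-1 hash are ever sent to the server, never the password or the full hash. A minimal sketch in Python, assuming network access to the public `api.pwnedpasswords.com/range` endpoint:

```python
import hashlib
from urllib.request import urlopen


def sha1_parts(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into a 5-char prefix and the rest."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none).

    Only the 5-character hash prefix leaves the machine (k-anonymity);
    the API replies with every suffix sharing that prefix, plus counts.
    """
    prefix, suffix = sha1_parts(password)
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        for line in resp.read().decode().splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)
    return 0  # suffix not in the response: password not in any known breach
```

The local/remote split is the point of the design: the server learns only that your hash starts with one of 16^5 prefixes, which maps to hundreds of unrelated passwords.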
In April 2024, the French underwear maker Le Slip Français suffered a data breach. The breach included 1.5M email addresses, physical addresses, names and phone numbers.
In March 2024, Canadian discount store Giant Tiger suffered a data breach that exposed 2.8M customer records. Attributed to a vendor of the retailer, the breach included physical and email addresses, names and phone numbers.
In April 2024, nearly 6 million records of Salvadoran citizens were published to a popular hacking forum. The data included names, dates of birth, phone numbers, physical addresses and nearly 1M unique email addresses. Further, over 5M corresponding profile photos were also included in the breach.
In March 2024, the independent fan forum Kaspersky Club suffered a data breach. The incident exposed 56k unique email addresses alongside usernames, IP addresses and passwords stored as either MD5 or bcrypt hashes.
In March 2024, the Indian audio and wearables brand boAt suffered a data breach that exposed 7.5M customer records. The data included physical and email addresses, names and phone numbers, all of which were subsequently published to a popular clear web hacking forum.
In February 2024, the paid survey website SurveyLama suffered a data breach that exposed 4.4M customer email addresses. The incident also exposed names, physical and IP addresses, phone numbers, dates of birth and passwords stored as either salted SHA-1, bcrypt or argon2 hashes. When contacted about the incident, SurveyLama advised that they had already "notified the users by email".
In March 2024, 1.3M unique email addresses from the online store for purchasing goods from China, Pandabuy, were posted to a popular hacking forum. The data also included IP and physical addresses, names, phone numbers and order enquiries. The breach was alleged to be attributed to "Sanggiero" and "IntelBroker".
In June 2023, the Tacoma-Pierce County Health Department announced a data breach of their Washington State Food Worker Card online training system. The breach was published to a popular hacking forum the year before and dated back to a 2018 database backup. Included in the data were 1.6M unique email addresses along with names, post codes, dates of birth and approximately 9.5k driver's licence numbers.
In March 2024, English Cricket's icoachcricket website suffered a data breach that exposed over 40k records. The data included email addresses and passwords stored as either bcrypt hashes, salted MD5 hashes or both. The data was provided to HIBP by a source who requested it be attributed to "IntelBroker".
In July 2022, the direct download website Exvagos suffered a data breach that was later redistributed as part of a larger corpus of data. The breach exposed 2.1M unique email addresses along with IP addresses, usernames, dates of birth and MD5 password hashes.
In August 2016, breached data from the vBulletin forum for GSM-Hosting appeared for sale alongside dozens of other hacked services. The breach impacted 2.6M users of the service and included email and IP addresses, usernames and salted MD5 password hashes.
In January 2019, the now defunct MMO and RPG game SwordFantasy suffered a data breach that exposed 2.7M unique email addresses. Other impacted data included username, IP address and salted MD5 password hashes.
In March 2024, millions of rows of data from the New Zealand media company MediaWorks was publicly posted to a popular hacking forum. The incident exposed 163k unique email addresses provided by visitors who filled out online competitions and included names, physical addresses, phone numbers, dates of birth, genders and the responses to questions in the competition. Some victims of the breach subsequently received ransom demands requesting payment to have their data deleted.
In March 2024, tens of millions of records allegedly breached from AT&T were posted to a popular hacking forum. Dating back to August 2021, the data was originally posted for sale before later being freely released. At the time, AT&T maintained that there had not been a breach of their systems and that the data originated from elsewhere. 12 days later, AT&T acknowledged that data fields specific to them were in the breach and that it was not yet known whether the breach occurred at their end or that of a vendor. AT&T also proceeded to reset customer account passcodes, an indicator that there was sufficient belief passcodes had been compromised. The incident exposed names, email and physical addresses, dates of birth, phone numbers and US social security numbers.
In September 2022, the online photo sharing platform ClickASnap suffered a data breach. The incident exposed almost 3.3M personal records including email addresses, usernames and passwords stored as SHA-512 hashes. Further, a collection of paid subscriptions were also included and contained names, physical addresses and amounts paid.
In September 2022, over 500k customer records from the Indian e-commerce service Flipkart appeared on a popular hacking forum. The breach exposed email addresses, latitudes and longitudes, names and phone numbers.
In August 2021, the Brazilian fast food company "Habib's" suffered a data breach that was later redistributed as part of a larger corpus of data. The breach exposed 3.5M unique email addresses along with IP addresses, names, phone numbers, dates of birth and links to social media profiles.
In September 2022, the Taiwanese Android forum APK.TW suffered a data breach that was later redistributed as part of a larger corpus of data. The breach exposed 2.5M unique email addresses along with IP addresses, usernames and salted MD5 password hashes.
In September 2022, the Russian e-commerce website Online Trade (Онлайн Трейд) suffered a data breach that exposed 3.8M customer records. The data included email and IP addresses, names, phone numbers, dates of birth and MD5 password hashes.
In March 2024, WoTLabs (World of Tanks Statistics and Resources) suffered a data breach and website defacement attributed to "chromebook breachers". The breach exposed 22k forum members' personal data including email and IP addresses, usernames, dates of birth and time zones.