you get important news and warnings about security and privacy on internet!
(Be patient – loading of this page takes few seconds.)
On this page, I give you the latest news, warnings and advice on the subject of security and privacy on the internet. You alone can take care of your own security and privacy and this requires some knowledge, strategy and constant vigilance.
(On the PRIVACY POLICY page, you will find my recommendations for a broad strategy to protect your computer from hackers.)
DISCLAIMER:

April marks Southwest Asia and North Africa (SWANA) Heritage Month, a time to recognize and celebrate the rich cultures, histories, and contributions of SWANA communities. At 1Password, we’re proud to highlight the people who bring these perspectives to life in our work and shape our culture every day.
This month, we’re spotlighting Kaynat Chowdhury, Customer Success Manager and Communications Lead for our SWANA Employee Community Group. We sat down with Kaynat to learn more about her career journey, her impact in Customer Success, and how community and belonging have shaped her experience at 1Password.
Can you share a bit about your career journey and what led you to Customer Success? Was this a path you always saw for yourself?
When I was in school in Bangladesh, I studied Science and then Commerce, then I came to Canada to get a Bachelor’s Degree in Sociology. All the while, I had no idea I was going to be in tech and in Customer Success. However, it really was the best decision and I feel that Customer Success found me more than I found it, and once I was in it, I realized it was a perfect fit. It combines everything I enjoy: building relationships, problem-solving, and making a real difference for the people I work with. Was this the path I always saw? To be honest, no! It’s quite hard to be an immigrant in a new country (I have been here more than a decade now) and truly know what path will be possible. You're just doing your best with what's in front of you. But I am so glad I stayed open, because Customer Success turned out to be everything I didn't know I was looking for.
As a Customer Success Manager, you work closely with organizations to help them get the most value from 1Password. How has that work evolved as we’ve expanded into areas like Unified Access, SaaS Manager, and EPM?
It has been incredible to see the reception our clients have with our product expansion from EPM into Unified Access and SaaS Manager. I have had the privilege of interacting across thousands of clients over the years and people really love our product and are curious about what we are building. This evolution is also allowing me to have much more strategic discussions with IT leaders and security teams about how 1Password fits into their broader security posture.
You’ve been at 1Password for four years and have seen the company evolve quite a bit. What’s felt most meaningful to you as that growth has taken shape, and what are you most looking forward to next?
Four years! I cannot believe it. When I think back to where I started versus where I am now, the growth has been remarkable – and not just for the company, but personally for me too. I went from Customer Success Representative, to Customer Success Manager, and now Customer Success Manager, Level 2. Watching 1Password evolve from a well-loved password manager into a comprehensive security platform has been genuinely exciting to be part of. The most meaningful moments have always been the human ones, though; the customers who tell you that your work made a real difference (which in my role, I get to hear a lot of) and the colleagues who show up for you every single day. Being part of a team like that is something I don't take for granted, and I want to continue contributing to that culture as we grow.
During your time here, we’ve also seen our inclusion efforts grow, including the launch of Employee Community Groups like SWANA. As Communications Lead for SWANA, what does your role involve, and how do you approach building connection and visibility for the community? 1Password's inclusion efforts have been wonderful to see and to be a part of. The love for my SWANA community and the amazing leads I share space with is truly unmatched. As Communications Lead, my role is really about making sure our community feels seen, heard, and celebrated, both within SWANA and across the broader 1Password organization. That means everything from crafting our messaging, to helping plan events and amplifying the stories of our community in ways that feel authentic and meaningful. What I love most about this role is that connection is at the heart of everything. The SWANA region is incredibly diverse, spanning so many cultures, languages, and experiences, and I think that richness is exactly what makes our community so special.
How has being part of the SWANA community shaped your experience at 1Password? Honestly, it has made me feel more at home. I already loved working at 1Password, but SWANA added a layer of belonging that is hard to describe. As someone who immigrated from Bangladesh, there is something really meaningful about having a space where your culture and your background are not just acknowledged but celebrated. It has connected me to colleagues I might never have crossed paths with otherwise, and some of those connections have become some of my most valued relationships here.
What would you say to someone from a background represented within the SWANA community who is considering a path in tech or cybersecurity today?
I would say: do not let the unfamiliarity of the industry intimidate you. When I was studying Sociology in Canada, I never imagined I would end up in tech. But here I am, and I genuinely love what I do. The skills you bring from your background, your ability to navigate different cultures, to communicate across differences, and to be resilient in unfamiliar spaces, are not weaknesses. They are strengths that this industry needs. Tech and cybersecurity need more diverse voices, more perspectives, more people who understand the world in different ways. The path may not always be clear, but the community around you will support you. Lean on it.
Kaynat’s story is a reminder that there’s no single path into tech – and that the perspectives we bring with us are often what make the biggest impact. Whether she’s building trusted partnerships with customers or fostering connection and visibility within the SWANA community, her work reflects the kind of care, curiosity, and leadership that drive both our business and our culture forward. As we celebrate SWANA Heritage Month, we’re grateful for the community Kaynat helps build and for the impact she makes every day in shaping 1Password as a place where people feel a true sense of belonging.
If you’re interested in joining us, explore open roles at 1Password.

When Anthropic revealed the existence of Mythos, the frontier AI model they deemed too dangerous for public release, the security community was alarmed. And it’s not hard to see why: Mythos is capable of detecting software vulnerabilities at a previously unimaginable scale, and autonomously crafting exploits to weaponize these flaws. According to Anthropic, Mythos created 181 exploits of Firefox in testing, ninety times more than the company’s previous model (Claude Opus 4.6).
The security world is facing down the prospect that soon, hordes of agents will turn the systems they rely on into Swiss cheese. But while concern is an appropriate reaction to this coming storm of vulnerabilities, panic is not. Instead, security and business leaders need to treat the next few months (which are likely all we’ll get before a Mythos-level model is widely available) as a precious gift: time to batten the hatches and prepare not just for a temporary crisis, but a permanently altered paradigm.
If there’s a silver lining to this storm cloud, it’s that it’s bringing the security community together to build collective solutions. As part of that effort, I was proud to contribute to The “AI Vulnerability Storm”: Building a “Mythos-ready” Security Program, a paper developed by Gadi Evron and Rich Mogull at the Cloud Security Alliance (CSA), CISO community, [un]prompted, SANS, the OWASP Gen AI Security Project, and a broad coalition of industry leaders.
This paper offers a roadmap for security leaders to make the most impactful changes at their organizations and work toward “Mythos-ready” resilience. Their recommendations combine AI-driven defensive capabilities, accelerated vulnerability operations, hardened core controls, updated risk models, and stronger cross-industry coordination to operate at machine speed and withstand continuous waves of AI-driven attacks.
The paper takes a broad look at how security can prepare for this new era of patch management – from how to use LLMs for code scanning to how to deal with security team burnout – but this blog focuses on my key takeaways. They reflect a shift in how defense actually works now that vulnerability discovery happens faster than any team can respond. In this world, the practical question is what an attacker can reach after initial access, and how far that access can spread.
In a Mythos environment, a flaw matters most when it leads to credentials, tokens, or keys that can be reused elsewhere. That is where incidents turn into breaches.
In the pre-AI world, vulnerability management was constantly compared to “whack-a-mole:” an unglamorous, tedious job that was never really finished. Now, the arrival of Mythos has made this old, piecemeal approach obsolete. As the paper says: “The window between discovery and weaponization has collapsed to hours. Attackers gain disproportionate benefit, and current patch cycles, response processes, and risk metrics were not built for this environment.”
The obvious implication of this shift is that organizations need to make serious investments in their discovery and remediation efforts, including employing LLMs to help identify and triage urgent needs. But, as the paper says, “we cannot outwork machine-speed threats.”
Trying to respond to every vulnerability will likely prove impossible, which means the real focus needs to be on containing the blast radius of any breach. More precisely, the goal is to ensure that a single exploit cannot be used to move across systems. And that means focusing on controlling access.
In practice, an exploit is usually the entry point of a breach, not its end state. What determines impact is the set of credentials or tokens available from that position, and whether they can be reused to access other systems.
Access makes the difference between an “incident” and a “disaster.” Human and agentic hackers alike are looking for opportunities for lateral movement, so they can use a vulnerability exploit as a beachhead for a larger attack.
Attackers are looking for:
Exposed API keys
SSH keys
Overpermissioned service accounts
.env files
Weak authentication methods (which runs the gamut from SMS codes to compromised passwords)
Bringing these secrets and credentials under control creates bottlenecks where defenders can contain breaches. Anthropic itself advises this approach: segmentation, strong authentication, and visibility over the entire attack surface. Their recommendations for preparing for a post-Mythos world include:
Adopt a zero trust architecture
Tie access to verified hardware rather than credentials
Isolate services by identity
Replace long-lived secrets with short-lived tokens
Decommission unused systems, since they tend to be unpatched
If you’re wondering how to protect your systems from vulnerabilities discovered by Mythos, the answer is about credential management as much as patch management. By centralizing every credential, from the .env files developers use to the service accounts agents operate, you create a “kill switch” for lateral movement.
As The “AI Vulnerability Storm” makes clear, agentic AI will be an indispensable tool in the fight against breaches, and the paper emphasizes the importance of getting the entire security team comfortable with using agents as soon as possible. But it’s equally important to build strong guardrails for agents throughout your organization. Here’s the upside: designing for good agents protects you from bad agents.
Any effort to secure agentic access must begin with discovery, since employees using shadow AI represent a glaring vulnerability. Agents and AI-based tools are vulnerable to prompt injection, can incorporate sensitive information into their training data, or contribute code that hasn’t been properly vetted or tested. Without proper training and tooling, employees (both developers and non-technical “builders”) might give their AI tools the same level of access they have themselves, rather than a scoped, least-privilege subset. And each time an employee gives an agent a hardcoded SSH key instead of a short-lived token, they create a path that could be used by an adversarial agent in a vulnerability exploit attack.
The challenge with agents is that they do not behave like traditional identities. They do not require interactive login and often run continuously without clear session boundaries or direct human oversight.
Instead of trying to make agents fit within existing IAM systems that were designed for human access, security leaders need to treat them as an identity class of their own, with unique authentication and authorization needs. This requires a shift away from static credentials, human approval for agent access, and enforcing strong, context-aware authentication, particularly for systems and workflows accessed programmatically. This not only reduces the likelihood of a malicious agent intruding, it also helps security teams quickly separate anomalous behavior from the background hum of “agents being agents.”
The idea of vulnerabilities going from “discovered” to “exploited” in hours is certainly worrisome, but the good news is that security practitioners are dealing with this problem as a united front; that’s what Anthropic’s Project Glasswing is all about. Preparing for this new reality will require a constellation of approaches, from how we test code to how we automate patches, and 1Password is ready to meet the moment by helping to secure access for humans and their agents.
Security programs that rely primarily on patch speed will struggle in this environment. Teams that adapt will assume compromise and design security approaches so that a single vulnerability does not expose access that can be reused across environments.
And the best time to start adapting is now.
Is your security program Mythos-ready? Learn more about how 1Password® Unified Access can help secure agent access.

Bob Lord has spent decades building and leading security programs, from early internet crypto work at Netscape to roles at Twitter, Yahoo, the Democratic National Committee, and CISA. In this episode of Chasing Entropy, he and host Dave Lewis get practical about why the security advice most people hear doesn’t match how real compromises happen.
Across secure-by-design, AI systems, and software supply chains, security breaks down when organizations treat outcomes like someone else’s problem.
When Bob talks about secure by design, he is deliberately not trying to write another technical framework. Plenty exist. His question is different.
If we already know how to prevent a long list of common issues, why do we keep shipping the same defects?
Secure-by-design breaks down when companies treat security as a feature or a compliance exercise rather than something they are accountable for delivering as a customer outcome.
Draw a line to quality and safety movements outside software, especially in automotive safety. Car companies used to compete on lifestyle and appearance, not safety. Customers did not know what to ask for. Manufacturers had little reason to prioritize safety until norms, regulations, and accountability shifted.
Software, in Bob’s view, is still in the pre-seatbelt era. We have normalized shipping unsafe components, building with unsafe processes, and delivering unsafe defaults. Then we act as if customers should be able to configure their way out of systemic risk.
From that lens, CISA’s Secure by Design work focuses on three principles:
Take ownership of customer security outcomes. Shipping a patch is not enough if you do not know whether customers update. Measure adoption and remove friction.
Embrace radical transparency. Make vulnerability handling easier, not adversarial. Build a real safe harbor for good-faith research.
Lead from the top. Meaningful change is driven by senior business leadership. You don’t delegate quality to the quality team, nor do you delegate security outcomes to security teams alone.
The AI section lands because it stays concrete.
Dave shares a story where an internal LLM was asked, “Who at the company doesn’t like me?” The system reportedly queried HR data and responded, highlighting that agentic systems can become permission amplifiers.
What changes in AI environments is not just the interface, but the speed and scale of access: systems can act across email, chat, HR, internal tools, and business apps faster than most access controls were designed to govern.
In many organizations, no single person can pull data from email, chat, and HR systems and fuse it into a targeted answer. But companies are increasingly giving AI systems broad access paths without mature roles, rights, and auditing. Then we try to patch over it with soft instructions like “don’t be evil.”
The takeaway is pro-accountability. If the system can take actions and surface sensitive conclusions, you need guardrails that reflect that power.
Open source comes up in the context of underfunded teams that cannot afford premium tooling. Bob agrees the constraint is real, but he pushes back on the industry habit of outsourcing responsibility. Constraints don’t remove accountability when insecure or unmaintained components make their way into customer-facing products.
If a defect ships in your product, it’s yours, even if it came from upstream.
He also calls out a common failure pattern: vendors using unmaintained dependencies for years, sometimes far longer, and not giving customers visibility into what is actually inside the product. SBOM practices exist. Some companies do this well. Many do not.
Whether the issue is insecure defaults, overpowered AI systems, or vulnerable dependencies, the pattern is the same: organizations cannot keep pushing security outcomes downstream and expect users, customers, or open-source maintainers to absorb the risk.
Subscribe to Chasing Entropy for honest, expert-led conversations on agentic AI, security, shadow IT, and extended access control from industry leaders.
Subscribe now
At 1Password, we approach security through simplicity. We are developing an agent identity architecture to simplify and enhance the security of AI agents, ensuring interoperability with existing systems. Our approach is built in collaboration with customers, partners, and the standards community.
As part of this work, we recently responded to NIST’s AI agent authorization paper. Our view is that agent identity is not a single problem. It is a set of challenges spanning identification, attestation, enrollment, authentication, and authorization for machine workloads with reasoning capabilities. The ability to reason is what sets AI agents apart from traditional machine workloads.
This post is the first in a multi-part series on why agent-driven systems require us to rethink identity to enable continuous authentication and authorization for reasoning agents, and how that shapes both our response to NIST and our own approach to agent identity.
Where traditional machine workloads have a “set and forget” policy, the nature of reasoning workloads means a static policy can become out of date as the agent interprets and takes its next action. Agents that automatically deploy software are a great example of this escalation chain. A deployment agent begins with access to QA resources, but its access needs evolve when tests pass and may then require access to production services.
The principle of Zero Trust maintains that you should provide only the minimum access needed, but infinitely evolving logic makes it difficult to apply the correct access for the lifetime of an agent process. This paradox is what sets agent workloads apart and makes them more challenging than traditional machine actors. An identity and access management architecture for agents more closely matches the needs of a human rather than a traditional machine workload, but that architecture needs to engage machines instead of human actors. Simultaneously, an agent identity architecture must apply Zero Trust principles in real-time.
Existing identity and access management (IAM) protocols address some agentic requirements, and they are a practical starting point for maintaining interoperability. At the same time, approaches built on federation or on cryptographic trust anchored to a central authority can introduce performance overhead and added complexity as autonomy increases. These tradeoffs are reasonable in the near term, particularly as the ecosystem continues to mature. Over time, the direction should move toward identity standards that reduce coordination costs and provide a more direct path to fully autonomous identity verification.
Digital identity has taken many forms over the years, but it is easier to understand through its issuer. Operating systems, directories, and federation all tie an identifier back to an authoritative source. An issuer provides a cryptographic guarantee that an identifier is a trusted identity, and any system that trusts the issuer can trust the identities tied to it. Digital identity is cryptographically bound to the issuer, meaning there is no (trusted) identity without a trusted issuer.
At their core, an identity is a collection of attributes about an identity that others can verify. In the same way a web browser validates a domain by verifying a certificate’s signature against a trusted public key, a relying party validates a digital identity by checking its signature against the issuer’s public key. This verification process creates confidence that the entity is who they say they are and is authorized to act within a fixed scope.
Non-cryptographic signals, such as where a process is running, who initiated it, and other provenance data, provide context that can be evaluated alongside, or in some cases independently of, a trusted issuer. This is the basis of attestation, where verifiable evidence about a workload is used to establish trust and, in many systems, to bootstrap enrollment into an issuing authority.
Attestation is a key part of the agent identity challenge because it enables issuers to automatically, in real time, bind an AI agent workload to an identity without human intervention. Automatic identity generation is critical for enabling and preserving autonomous systems and, therefore, allowing agents to operate more securely without humans in the loop.
Automatic identity issuance also enables continuous enforcement of Zero Trust policies. Each attestation produces fresh, verifiable evidence of the workload, which can be used to dynamically adjust access. Instead of granting standing permissions, access is derived from the most recent attestation and constrained to what is justified at that moment. This is a real-time application of the Zero Trust principle, and is a first-order requirement for any agent identity framework.
In our feedback to NIST, we “recommend that Zero Trust Architecture (line 144) be a hard requirement for any solution NIST designs and accepts.” Prompt injection attacks are increasingly common, and we must accept that any framework securing a system susceptible to this broad threat must be treated as compromised by default. Zero Trust policy must be applied in real time, as close as possible to each agent action, with as little human intervention as possible. It must set the default path to the secure path, and the secure path must be the automated path.
The Zero Trust requirement is relevant to NIST’s framing of agent use cases. In our feedback, we recommend “splitting the use case on line 169, Enterprise AI Agents for Software Development and Deployment, into two separate use cases. The threat model for using an agent to develop software is very different from deploying software in production systems.” Generating code and taking action on production systems are two different trust domains. When an agent has access to customer data, infrastructure, or sensitive configurations, including API keys, a real-time Zero Trust system becomes even more relevant.
Agent identity requires a model of authorization and authentication that can adapt in real time as agent behavior changes. 1Password is one of many organizations working to address the challenges of agent identity and access management, and meaningful progress depends on collaboration across the ecosystem. We are working with partners across foundation model providers, standards bodies, and emerging startups to shape an approach that is comprehensive, practical, and interoperable.
We encourage readers to review NIST’s work on AI agent authorization and to follow emerging drafts from the IETF and W3C. These efforts offer early visibility into evolving protocols and help clarify where the industry is converging.
From our perspective, advancing identity in this space will come through shared development rather than a single defining solution. Progress will depend on contributors aligning around architectures that support a range of enterprise, government, and consumer use cases. We welcome engagement from others working in this area, as well as perspectives that challenge or refine this approach.
See how 1Password® Unified Access helps secure the next layer of AI security by governing how access is used at runtime.
Learn more
Most organizations already have the policies they need in place. The problem is enforcement.
Employees must complete security awareness training, contractors must acknowledge updated agreements, and teams must meet compliance requirements. But the systems that track these requirements rarely connect to the systems that control user and device access. As a result, access is granted even when required conditions haven’t been met.
That’s why we're excited to announce that 1Password Device Trust can now take signals from other systems into account before allowing users to reach sensitive company apps and data.
Until now, 1Password Device Trust focused primarily on device telemetry. That meant administrators could block employees from accessing company resources if their device failed to meet certain requirements, but they couldn’t enforce compliance based on signals that live outside of the device. With the ability to create custom External Checks, that changes.
Access to protected apps can now depend on:
User compliance status
Policy acknowledgments
MFA enrollment status
Active employment status
Many other external verification signals
Access decisions are no longer limited to what’s happening on the device. They reflect whether the user of the device has met required conditions across systems.
Administrators configure an External Check by connecting Device Trust to a third-party system via API. That external system becomes a source of truth for a specific requirement, such as whether a user has completed training or acknowledged a required policy.
When a user attempts to access a protected application:
Device Trust evaluates device posture as it does today.
Device Trust sends a request to the configured external system.
The external system returns a simple result: pass or fail.
Device Trust incorporates that result into the overall access decision.
If the check passes, access proceeds normally.
If the check fails, Device Trust can block access according to the policy defined by the administrator, while providing end users with custom remediation instructions so they know exactly how to resolve the issue.
This keeps enforcement centralized in Device Trust while allowing organizations to rely on their existing HR, training, or security systems as sources of truth.
Zero Trust is about verifying that only the right user, on the right device, under the right conditions, can access the right application. External Checks help organizations move closer to that model by connecting the disparate systems that are already in place to make a more informed access decision.
By bringing identity and compliance signals into Device Trust, security teams can reduce gaps between compliance systems and real-world access. Ready to set up External Checks for your organization? Check out our documentation here to get started.

To support enterprise workflows like monitoring systems, triaging support tickets, and automating routine work, AI agents need access to the same sensitive systems employees use, including databases, APIs, SaaS tools, and internal infrastructure. However, many of these systems still rely on shared passwords, API keys, tokens, and other credential-based access paths that are difficult to manage and control.
As organizations put agents to work for new use cases and in new environments, IT and security teams need a better way to manage the credentials and secrets agents need, without embedding them in code, configuration files, and internal tools.
Together, Natoma and 1Password offer organizations a secure, scalable way to integrate AI agents into enterprise workflows where credentials are centrally managed, and agent access is governed across necessary tools and systems.
Traditional IAM secures human access at login but doesn’t extend to the shared credentials, secrets, and service-level access paths AI agents operate within. Without a secure, centralized model, organizations can lose visibility into how sensitive access is being used, who or what is using it, and whether controls are being applied consistently over time.
When secrets are embedded directly into code or agent configurations, they’re not governed at the point of use. Agents can invoke the same credential repeatedly across workflows, pass it through downstream tools, or continue using lingering access in ways that are difficult to scope, monitor, or revoke effectively.
1Password helps close that gap by keeping credentials and secrets centrally managed and available for runtime retrieval. Natoma builds on that foundation by brokering and governing how agents exercise that access, so organizations can maintain control without embedding secrets directly into code or agent configurations.
Natoma and 1Password work together to secure how AI agents access enterprise systems. 1Password keeps credentials and secrets centrally managed. Natoma brokers and governs how agents use that access inside enterprise workflows.
Here’s how it works:
A user connects their 1Password vault to Natoma
Credentials and secrets needed by agents remain stored in 1Password
When an agent needs access, Natoma retrieves the appropriate secret reference at runtime
Natoma brokers and governs the interaction between the agent and the target system
This eliminates the need to store credentials in code or configuration files, reducing secret sprawl. With sensitive access under centralized control, organizations get a more governable way to deploy AI agents and monitor access to sensitive enterprise systems.
Ravi Chinni, Global Head of IAM at S&P Global, highlights the value of interoperable solutions like Natoma and 1Password:
What’s exciting about partnerships like this is their potential to strengthen the broader ecosystem, not just solve a single access challenge. As AI agents become more embedded in enterprise operations, organizations will need interoperable approaches that bring together credential protection, policy governance, and auditability across platforms.”
As more teams adopt AI agents, the number of system connections grows quickly. Organizations can also define policies that control how agents interact with systems, such as:
Allow read-only database access
Block write operations
Restrict access to sensitive tables
Limit query rates
Scope permissions by agent or user group
This gives teams a clear view of how agents interact with enterprise systems and a consistent way to govern access as adoption grows.
To scale AI securely, organizations need more than connectivity. They need a governed way to manage how agents access systems, use credentials, and interact with sensitive data.
Natoma and 1Password provide that foundation by keeping credentials centrally managed in 1Password. They give organizations a way to operationalize and govern how agents use that access through Natoma.
Organizations can scale AI agent access with stronger control, better auditability, and fewer secrets scattered across the business.
Contact us to see how Natoma and 1Password can help you securely connect AI agents to enterprise systems.
Contact us
Most organizations can tell you which apps sit behind SSO. Far fewer can tell you what other apps teams are using, or who has access to the credentials.
Shared and sensitive non-SSO logins remain some of the hardest access paths to govern. Credentials are often tied to individuals, scattered across vaults and browsers, and difficult to rotate or revoke when roles change. For many teams, this creates a gap in their Zero Trust strategy.
For the last several months, we’ve been hard at work connecting 1Password Enterprise Password Manager and SaaS Manager to help close that gap. Today, we’re announcing several integrated features that help IT admins discover and govern shared and sensitive logins.
Want to see how these integrations work in action? Check out our self-guided, interactive demo.
Try the demoFor more than a decade, 1Password Enterprise Password Manager (EPM) has helped thousands of businesses securely store and manage credentials and secrets. More recently, SaaS Manager has helped organizations discover shadow IT, manage employee access, and control SaaS spending.
Now, we’re bringing these solutions together.
When customers use Enterprise Password Manager and SaaS Manager together, they gain new capabilities:
Vault insights: Discover SaaS accounts from 1Password vault credentials for better IT visibility into sensitive and shared app use.
Browser insights: Reveal login activity from the 1Password browser extension to show app usage, even when credentials aren’t saved in a company vault.
Account risk report: Identify high-risk accounts based on access risk, data sensitivity, privileges, and attack patterns.
Account governance: Transfer control of sensitive accounts to IT to enable secure access control and auditability without exposing passwords.
Together, these capabilities extend Zero Trust governance beyond SSO and ensure that organizations can discover and secure credential-based access.
Now you can see exactly which applications rely on traditional credentials and understand who uses them and how.
When an employee changes roles or leaves, teams can update or revoke access and rotate credentials. No more searching through vaults, chasing down shared logins, or wondering if old passwords are still active.
Every access change and credential rotation is automatically logged, giving compliance teams defensible records for frameworks such as SOC 2, ISO 27001, and HIPAA. What once took hours of manual follow-up becomes fast, consistent, and fully visible.
By combining credential management, SaaS visibility, and access governance, 1Password helps organizations apply Zero Trust principles across more of their real access environment, not just the apps behind SSO.
If you already have EPM and SaaS Manager, you can enable these capabilities in your environment now. Reach out to your account manager or contact us to learn more.
If you use only Enterprise Password Manager, adding SaaS Manager unlocks the broader set of governance and visibility capabilities. Contact us to learn more and explore how SaaS Manager can help your organization.
The future of access isn’t about forcing every app behind SSO. It’s about meeting teams where they work and governing access responsibly as they grow.

At 1Password, our mission is simple: to protect people’s most critical information, their credentials. At the time of writing this post, I personally have 291 items in my vault, so the long-term confidentiality of this data is critical to myself and every 1Password user. We are thrilled to announce the first major milestone in our post-quantum cryptography (PQC) journey, the successful deployment of PQC on 1Password’s web application. If you’re using a PQC-capable browser, such as Chrome or Firefox, your data is protected today with no action required.
The threat of a large-scale quantum computer, sometimes referred to as a cryptographically relevant quantum computer (CRQC), is its potential to break the public-key cryptographic algorithms. These algorithms are used in most communication protocols and digital signature schemes. While it's unclear that a quantum computer powerful enough to break the public key cryptography will ever exist, we are not waiting for one before taking action to protect your data.
“Harvest now, decrypt later” attacks are a practical concern where adversaries intercept and store encrypted traffic today with the intention of decrypting it in the future, once quantum computers become powerful enough. We are putting protections in place now to ensure the long-term confidentiality of our customers’ data well into the future.
This is the first step in our long-term plan to protect customer data and withstand harvest-now, decrypt-later attacks. We will provide updates in the future as we migrate other parts of our infrastructure to support PQC, as we firmly believe that cryptographic designs should be done in the public.
We began our PQC rollout where it matters most for long-term confidentiality: internet-facing traffic. When a browser connects to 1Password, it establishes a TLS session using public-key cryptography to negotiate encryption keys. Historically, that key exchange relied solely on classical algorithms like elliptic curve cryptography. While secure against today’s computers, those algorithms may be vulnerable to sufficiently powerful quantum computers.
With this launch, 1Password now supports hybrid post-quantum key exchange (X25519MLKEM768) for all 1Password web application connections. When a compatible browser connects, it negotiates a TLS handshake that combines classical cryptography with a quantum-resistant algorithm (such as ML-KEM). This hybrid approach preserves compatibility while adding protection against future quantum adversaries. This all happens automatically; there are no configuration changes or performance penalties.
If you’re using a modern browser, such as Chrome, you can verify this yourself.
Open your browser and navigate to your 1Password account (for example, https://my.1password.com).
Under settings, navigate to More Tools -> Developer Tools.
Select the Privacy and Security tab.
View the Security Overview and note the connection uses X25519MLKEM768

If PQC is being used, you’ll see a hybrid key exchange (X25519MLKEM768). PQC depends on browser support, so results may vary depending on version and configuration. If you do not see PQC being negotiated, please update your browser and double-check other test websites such as https://pq.cloudflareresearch.com/
This milestone represents the first phase of a broader post-quantum roadmap at 1Password. We are focusing on the parts of our architecture that are most at risk of HNDL attacks to preserve long-term confidentiality. We will provide future updates and more technical details as we expand our PQC coverage across our products.
At 1Password, our responsibility is to protect your data, not just against today’s threats but tomorrow’s as well.

Every year, security and tech leaders come to the RSA conference in San Francisco to take the industry’s pulse, and every RSAC tends to be dominated by a single, overarching theme. Last year, the theme was: “AI agents are coming, and governance isn’t ready.” And sure enough, the theme of RSAC 2026 was: “AI agents are here, and governance needs to catch up.”
Throughout the conference, security practitioners, vendors, and analysts were all asking the same questions:
How can we enable a culture of agentic AI builders, without compromising on bedrock security principles?
How can we mitigate the potential for AI agents to behave unsafely, either via malicious compromise or their own nondeterministic nature?
What are the most impactful safeguards every organization should be putting into place to secure AI and automation in the next year?
1Password provided answers to those urgent questions at RSA. Prior to the event, we announced the release of 1Password® Unified Access, a new platform that helps teams discover, secure, and audit access across humans, agents, and machine identities, so organizations can adopt AI confidently and securely.
At RSA, 1Password leaders spoke on panels, met with customers, discussed what's next in agentic security with industry analysts and press, and demoed our products for booth visitors. Here’s a look at the highlights.
The 1Password booth was buzzing with RSAC attendees eager to learn about how our latest product releases could address their security needs. They experimented with interactive demos of our products, which you can check out for yourself:

Monday night, 1Password leaders hosted a customer appreciation happy hour, where everyone enjoyed the chance to unwind, swap stories, and discuss the shared future for 1Password and our customers.
Throughout RSAC, 1Password hosted sessions at our offsite space featuring company leaders and industry peers. On Tuesday, the theme of the talks – both at the 1Password space and the convention center – was breaches: how to prevent them and how to respond when you’re faced with one.
Over in the convention center, Wendy Nather, 1Password’s Senior Research Initiatives Director, gave two talks about breaches. In the first, playfully titled “Less blood, more bits: Learning from “near misses” in cybersecurity,” she talked with Bob Lord, Head of the Consumer Working Group at Hacklore.org. They shared real-world examples of how close calls can be a blessing in disguise for security professionals. In the second session, she discussed how IAM infrastructure can be a helpful incident response tool, even if it requires some hasty retooling.
This session featured a lively conversation between Dave Lewis, 1Password Global Advisory CISO, Nick Fohs, Senior Manager of Enterprise Systems & Security at Reddit, and Ryan Berckmoes, Systems Analyst II at FranklinCovey. They discussed:
The compounding challenges of SaaS and credential sprawl
Balancing the excitement and anxiety of developers adopting agentic workflows
The need for human-centric design to foster secure employee behavior

When we look at shadow AI and shadow IT, it’s never adopted out of a sense of malice. These are people literally just trying to get their jobs done. So how do we solve that problem? How do you distinguish acceptable experimentation from risky, unmanaged SaaS applications without being a blocker? You want to try and raise all boats.” - Dave Lewis, Global Advisory CISO, 1Password
Agentic AI permeated every conversation at RSAC 2026, and it was the primary focus of Wednesday's events. Leaders from some of the most trailblazing companies in AI joined 1Password for discussions that ranged from philosophical to highly technical.
In this fireside chat, Jacob DePriest, 1Password CISO and CIO, Sanjay Ramnath, VP of Product Marketing, and Francis Odum, SACR cybersecurity analyst, discussed the rapid evolution of identity security, and what needs to change in order to keep up.
They unpacked about how the access needs of AI agents and NHIs are breaking the traditional login-based authentication model and creating blind spots for security leaders. Francis Odum particularly emphasized the need for the C-Suite to invest in solutions to this problem, rather than waiting for a potentially devastating breach.
I think that the traditional model of login tied to a human is going to just go away. I don’t think we’re going to see that anymore... I think we’re going to see a more continuous model even for humans." -Jacob DePriest, CISO and CIO, 1Password
Next, 1Password’s Jeff Malnick, VP of Engineering, Developer & AI, and Jason Meller, VP of Product, were joined by Travis McPeak, CISO of Cursor, and Tal Peretz, founder of Runlayer.
They shared anecdotes of how they’re using agentic AI in their own workflows, considered the future of human vs AI code review, and debated whether AI agents could be considered tools or actors. The eventual consensus was that regardless of how autonomous an agent is on a philosophical level, the ultimate responsibility for their outcomes remains with humans.
I use agents all the time to not only diagnose things but to actually fix them. And I approve every single one of those commands… The reason I approve every single command is if my agent goes nuts or gets prompt injection, like, deletes prod, nobody in the business is going to be under any illusion that it’s the agent’s fault. That’s 100% Travis’s fault.” - Travis McPeak, CISO, Cursor
In this session, Nancy Wang, CTO of 1Password and Fotis Chantzis, Agent Security Lead of Open AI examined why over-permissioning becomes exponentially riskier with always-on AI agents, and how to design time-bound, contextual access controls that enforce security at runtime.
This conversation delved into the details of how to mitigate over-permissioning and context leakage through technical safeguards that ensure agent permissions are timebound, auditable, and tightly-scoped.
The moment that any kind of secret material, credentials, passwords, whatever, that you consider sensitive are part of the context window of the agent, then it’s sort of game over.” – Fotis Chantzis, Security Lead, OpenAI
After a day of deep thinking, everyone was happy to cap things off with an after-conference gathering, where leaders from 1Password, Felicis, and Abnormal networked with RSA attendees.
1Password closed out RSA with more engaging conversations on the conference floor and our ancillary space.
This expert-led discussion on securing the next generation of autonomous AI agents included a host of security leaders: Dave Lewis Global Advisory CISO, 1Password; Steve Ragan, Principal, AI Security Advisory, 1Password; Ryan Marshall, AI Researcher, 1Password; and Rich Mogull, Chief Analyst, CSA.
We’ll end the RSA recap with some reflections from 1Password CEO Dave Faugno on the challenges of managing identities in a world of AI agents. Ultimately, 1Password is uniquely situated to solve those challenges based on the foundation we’ve already built: trusted, secure vaults, our presence on millions of endpoints, and a commitment to balancing security with simplicity.
While this year’s RSA may be over, the months ahead are going to be action-packed for 1Password, as we accelerate our mission to secure access for humans and their agents. The release of Unified Access marked a major milestone for our company, and more are on the way. Stay tuned.
Want to learn more about how 1Password is governing non-human access? Join the AI access shift webinar.

If 2025 was the year of AI adoption, 2026 is when AI evolves from a software story to a people story. Katya Laviolette, our Chief People Officer, explored this idea in a recent Forbes article about how 1Password’s internal network of AI Champions is shaping this evolution and helping us set the standard for how we use AI to drive impact across 1Password.
AI tools help us move faster, but it takes curiosity and judgement to unlock their full value, build new ways of working, and to deliver meaningful outcomes for our teams and customers. That’s why we’re investing in a culture of AI fluency shaped by people across the business, brought to life through our AI Champions.
AI Champions are internal advocates for AI adoption who guide us as we make AI fluency, security, and experimentation part of our daily work. They’re critical thinkers from both technical and non-technical departments, including Product, Tech, Marketing, Go-To-Market, and Finance, who are passionate and proactive about shaping the future of AI use at 1Password and committed to helping others build that confidence too.
We know the best ideas come from diverse perspectives, and because AI touches every part of the business, we seek out representation across teams and levels. Anyone on our team can apply to join the AI Champions network if they’re excited to spark curiosity, build confidence, and model responsible AI use, empowering colleagues to approach AI with curiosity, critical thinking, and impact.
At the heart of the program is a simple goal: build internal AI fluency, excitement, and momentum across 1Password.
AI Champions put that into practice by bringing AI learning to life in practical and useful ways across the business. They share specific use cases, contribute to AI workflow integration, and engage in peer learning through discussions, user groups, and office hours. They also play a critical role in enabling enterprise tool rollouts. They experiment intentionally with approved tools, share success stories, add to our Notion-based AI Knowledge Repository, and drive attendance to learning events, driving better business outcomes and forward momentum.
Our AI learning strategy is led by our Learning and Development team, who ensures AI learning is sustainable and cohesive across the company. With AI Champions bridging the gap between enterprise strategy and daily practice, they serve as the grassroots, peer-to-peer channel driving adoption at the team level.
Adoption grows when people can see how something works in context, ask questions in a safe environment, and learn from peers they trust. At 1Password, this work is being shaped by the people closest to it, making AI fluency something we’re building together.
Our AI Champions are already turning momentum into tangible outcomes. Here are a few examples of what they’ve made possible.
1. A Finance AI Hackathon that turned manual work into practical automation
AI Champions on our Finance team organized a three-day hackathon that enabled 90 team members to trade manual spreadsheets for custom AI agents. The team built 21 agents that are now saving the team 5-10 hours per person every week.
2. A human-centered AI conversation that reached more than 300 team members
Featuring our Principal, AI Security Advisory alongside a cross-functional group of AI Champions, this session reinforced an idea we care deeply about: AI is core to how we deliver on our North Star, but meaningful adoption requires human judgement, confidence, and control. The panel shared practical ways to apply a human-in-the-loop approach and build with AI responsibly.
3. Launches of new enterprise tools
AI Champions are involved early in major rollouts, and that early involvement has been valuable in recent launches of new tools with agentic capabilities. They participated in proof-of-concept groups, evaluated functionality, and built their own fluency before broader rollout. When the tool launched more widely, many teams already had trusted peers in place to offer hands-on, context-specific support and help their peers get started with confidence.
These examples show what’s possible when AI is practical, and people feel ownership in the process. AI Champions helped create the peer-to-peer conditions within their functions for that work to take hold, with outcomes people could feel right away.
The energy behind this work is one of the best parts. Here’s what a few of our AI Champions are most excited about as they make space for more meaningful work:
Tré Bembry, Senior IT Engineer
“I’m excited to see how we continue shaping and refining how we use AI across our development teams, but equally across our non-development teams. As more tools become available, the real opportunity is making AI approachable for everyone. When we pair it with clear standards, it becomes more than a productivity boost. It reshapes how we learn and solve problems together.”
Qingling Oblinger, Senior Staff Tech Program Manager
“The most exciting gift of AI in 2026 for me is time. Real, reclaimed time. When AI handles repetitive routines like synthesizing updates and preparing for meetings, I get to show up differently. AI frees me to focus on strategic thinking and meaningful human connection. And when you engage it the right way, asking it to push back and challenge your thinking, it doesn't replace your mind. It sharpens it."
Audrey Wild, Manager, Service Support
“I am most excited about the enablement AI creates within training. It allows us to move beyond static materials toward interactive, personalized learning that adapts in real time. That shortens the path from knowledge to confidence. If used deliberately, AI can help teams build skills faster and apply them more effectively in their day-to-day work.”
Monique Mongeon, Staff Manager, Product Ops
“In Product Ops, my job is to chase and erase friction, wherever I can. Helping our teams leverage AI to take care of the busywork and give people back time to think about the meaty security problems our users face has been really rewarding. The outcome of all that thinking is what excites me, as we find new ways to help our customers use these tools in the most secure way.”
There’s no shortage of conversation right now about AI tools, but the real story inside organizations is more human. We’re interested in whether people feel empowered to experiment, equipped to think critically, and supported as they learn. We want to create the conditions for curiosity, confidence, and responsible innovation, and recognize that the most meaningful impact doesn’t come from technology alone, but from the people who shape how it’s used.
That’s what our AI Champions network is building at 1Password: a culture where AI is approached thoughtfully, collaboratively, and with purpose.
If this sounds like the kind of team you’d like to grow with, check out our open roles at 1Password.

Enterprise password managers (EPM) like 1Password, LastPass, Dashlane, and Bitwarden help you create, store, and fill strong passwords and credentials across different websites and apps, so you don’t have to remember or write them down. EPMs provide secure sharing, data encryption, and data breach prevention against phishing and malware, helping IT and Security teams protect and enforce policies around credentials.
While there are many EPM options, choosing the best password manager can be a challenge. A side-by-side comparison helps you see which is best for your organization’s cybersecurity strategy.
If you’re comparing 1Password and Keeper, it helps to start with what both products are built around: an Enterprise Password Manager (EPM). EPMs are how both platforms store, share, and enforce policies around credentials. They’re the foundation for each vendor’s broader security strategy. Below is a comparison of core features many organizations consider essential for protecting employees and their credentials.

Enterprise password managers store and encrypt users’ login credentials to discourage password reuse and insecure storage (like spreadsheets or sticky notes). They typically include a password generator and enable secure sharing among authenticated employees.
Both 1Password and Keeper cover these basics. The differences tend to show up in what’s included by default versus sold as add-ons. In many deployments, 1Password includes critical capabilities as part of the standard product, while Keeper packages some features as paid add-ons. 1Password also includes 20 guest accounts with every business plan, which can be useful for securely sharing vault access with contractors, auditors, or temporary collaborators. Keeper does not offer equivalent guest access to vaults; sharing is generally limited to users provisioned within the same account.
Both Keeper and 1Password EPM use AES-256 encryption. With either service, vault data is decrypted locally on the user’s device. Both operate using a zero-knowledge architecture, which means neither provider can decrypt your vault data.
1Password adds a 128-bit Secret Key in addition to the account password, creating a stronger model than relying on a single account password alone. 1Password also uses a password-authenticated key exchange (PAKE) protocol to protect the user's password andadd an additional layer of security to authentication.
Keeper claims that 1Password doesn’t encrypt at the “record level.” That’s a terminology difference: Keeper calls vault items “records.” Each 1Password vault item is encrypted individually.
Both Keeper and 1Password have the ability to alert users when a password for their account has been compromised and leaked on the dark web. 1Password’s service is called Watchtower, while Keeper’s is BreachWatch.
With 1Password, Watchtower alerts are included in your company plan, whereas Keeper’s BreachWatch costs extra.
Keeper describes their secrets manager as acore element of their PAM solution, but it’s a required add-on, even for enterprise tier customers of their password manager. And while Keeper supports a number of integrations, many are CLI or service-mode driven, requiring teams to deploy tooling, manage configurations, and maintain them over time.
1Password treats secrets management as a first-class workflow. It’s designed to reduce operational overhead, support secure .env workflows, and enable programmatic retrieval and injection of secrets across GitHub Actions, Kubernetes, and more.
Keeper and 1Password both conduct regular third-party security audits. Keeper, however, maxes out their bug bounty at$25,000.
1Password offers a bug bounty of up to $1,000,000.
On the certification front, 1Password holds ISO 27001:2022, ISO 27017:2015, ISO 27018:2019, and ISO 27701:2019, along with SOC 2 Type II attestation and a published security whitepaper. Taken together, these provide a level of externally validated assurance and documentation depth that many buyers look for during security review.
1Password includes a secure Travel Mode, which limits the amount of information stored on an individual device during travel. Only vaults marked “safe to travel” remain on the device.
The Associated Press has publicly described using 1Password to help protect journalists traveling to high-risk countries.
Keeper does not have a comparable travel mode.
When a Keeper user is locked out and can't answer their backup security question, the admin recovery process is lengthy: the locked-out user must be deleted, their vault rights transferred, a new blank vault created, the data manually moved, and the user re-provisioned — a workflow that can take 30 minutes or more.
With 1Password, anyone with recovery permissions can go to the user's profile, click "Begin Recovery," and the user receives an email to reset their account password and Secret Key. The entire process takes under two minutes.
Keepercharges for support, with prices increasing depending on the size or needs of your company.
1Password EPM, however, includes onboarding and customer support for any account over 75 seats; admins get the help of a dedicated team as they roll out the solution.
Keeper offers multi-tenancy via its MSP console and SCIM-based provisioning, but both introduce operational friction. Newly provisioned users enter a "Pending" state until encryption keys are exchanged, requiring a separate Automator service. SCIM can't natively assign Keeper Roles, and deprovisioning only locks a user's vault rather than removing it, leaving manual cleanup for admins. Group management relies on workaround prefixes rather than enforced policies.
1Password takes a different approach: multi-tenancy gives enterprises isolated child accounts under a single parent dashboard with centralized Policy Templates that enforce security baselines across every tenant. Hosted Provisioning runs inside confidential computing enclaves with no customer-hosted servers to maintain, validates every change against the identity provider before applying it, and delivers complete deprovisioning.
1Password is a strong choice for organizations that want to prioritize user experience and privacy. 1Password is designed to work on the devices people actually use, including unmanaged and personal devices, without relying on invasive monitoring. 1Password is transparent about what activity data it collects, which helps organizations improve security while maintaining user trust.
If you want to strengthen credential security across your workforce, please reach out to us.
Contact usTry 1Password free for 14 days and see how it can help your team secure access without slowing work down.
Try it free
Enterprise password managers (EPM) like 1Password, LastPass, Dashlane, and Bitwarden make it easy to create, store, and use strong passwords across websites and apps. With features like secure sharing, data encryption, and protection against phishing and malware, these tools help IT and security teams keep credentials safe and enforce company policies.
With so many EPM options available, choosing the right one can be difficult.Comparing features, security measures, and usability side by side can help you determine which password manager best aligns with your organization’s cybersecurity requirements.
If you’re comparing 1Password and LastPass, it helps to start with what both products are built around: an enterprise password manager that stores, encrypts, and helps manage credentials across your organization.
Both platforms cover the fundamentals: generating strong passwords, enabling browser autofill, and securely storing sensitive information such as login credentials and credit card details. But for IT and security teams, the differences show up in how each platform helps reduce credential risk across the business through security architecture, admin visibility, reporting, and operational overhead.
That’s because credential risk rarely starts with a dramatic failure. More often, it builds over time through everyday convenience: shared logins, unmanaged credentials, shadow IT, and access that persists outside of SSO and other centrally managed systems. At that point, the challenge is not just storing passwords securely. It’s gaining the visibility, control, and operational simplicity needed to secure credential-based access across the organization.
Below is a structured comparison of the key areas to evaluate password managers at scale.
1Password | LastPass | |
Two-Secret Key Derivation (2SKD) Security Model | Included | Not offered |
Guest accounts for EPM | Included | Not offered |
SIEM Integrations | Broad (CrowdStrike, Datadog, Splunk, Sentinel, more) | Limited (Splunk, Sentinel) |
Secure travel mode | Included | Not offered |
Built-in phishing detection | Included | Limited |
Secrets Management | Included | Added cost |
Multi-tenancy (parent-child accounts) | Included | Not offered |
Password managers are designed to eliminate risky behaviors like weak or reused passwords, sharing account logins, or storing credentials in spreadsheets and sticky notes. While these habits may seem harmless, over time they turn into a larger security gap, with more credentials, more unmanaged SaaS and AI tools, and more ways for access to persist after it should have been removed.
Both 1Password and LastPass allow you to:
Generate strong, unique passwords
Store credentials securely
Autofill logins across browsers like Chrome and Firefox
Sync across macOS, Windows, Linux, iOS, and Android
Vaults in 1Password provide flexible organization, with granular permissions for individuals, teams, and shared use cases.
For enterprise teams, password storage is only the starting point. The bigger difference is how each platform helps you secure and govern credentials across real-world workflows, especially when access happens outside IdP and SSO coverage. This is where visibility, flexibility in sharing, and admin control matter more. Security model and encryption
Both 1Password and LastPass use AES-256 encryption and operate under a zero-knowledge model, meaning neither provider can decrypt your stored data. The difference lies in how vault access is protected.
LastPass relies on a single master password, optionally combined with additional authentication factors. That means protection still depends heavily on that single shared secret.
1Password adds a second layer of protection with a device-generated Secret Key, combined with the account password using two-secret key derivation (2SKD). This strengthens the encryption model by requiring both components to unlock account data.
This means that even in the unlikely event of a breach, vault data remains protected without the Secret Key, so your 1Password data would be safe even in the event of phishing, brute-force attacks, or unauthorized access.
1Password also uses Secure Remote Password (SRP) in addition to the security-standard Transport Layer Security (TLS). SRP proves to the server that you know your account password and Secret Key. But, crucially, you never actually have to share them with the server, which prevents anyone from trying to steal that information in transit.
Both platforms provide ways to identify weak or compromised credentials. The difference is how quickly teams can turn those insights into action.
1Password includes Watchtower, which provides real-time visibility into:
Breached credentials
Weak or reused passwords
Vault-level password health
Watchtower surfaces these insights directly in the product, so both users and admins can quickly identify and remediate risks without relying on separate reports or workflows.
LastPass provides security and activity reports, but these are:
Manually generated
Delivered by email
Not real-time or continuously actionable
Reports may also expire after a set period, requiring additional admin effort to maintain visibility and compliance. For security and IT teams, the difference lies in visibility, reduced manual effort, and the speed at which you can identify issues and take action before they become incidents.
Security teams rely on centralized visibility across their security stack. 1Password provides:
Events Reporting via API
Broad SIEM integrations including CrowdStrike, Datadog, Splunk, Sentinel, and more
This allows teams to stream activity data, build custom alerts, and correlate password-related events with other security signals.
LastPass supports SIEM integrations, but with limited documented integrations (primarily Splunk and Sentinel), and less flexible reporting and event visibility
Phishing remains one of the most common ways credentials are compromised. Which means true credential security goes beyond storage to help users avoid credential misuse and unauthorized access.
1Password includes built-in phishing protection in its browser extension. When a user attempts to paste credentials into a suspicious or mismatched domain, 1Password displays a warning to help prevent accidental credential exposure.
LastPass provides general phishing protections and guidance, but its controls rely more on user behavior and extension usage than on proactive intervention.
Both platforms support credential sharing, but with different levels of flexibility and collaboration.
1Password offers:
Shared vaults for families and teams, useful for long-term collaboration
Secure sharing of individual items, even with people who don’t use 1Password
20 guest accounts with a business plan for securely sharing vault access with temporary collaborators.
LastPass offers:
Shared folders
User-to-user sharing
More limited options for sharing with non-users
LastPass does not offer equivalent guest access to vaults, and sharing is generally limited to users provisioned within the same account.
For teams working with contractors, external partners, or temporary contractors, flexibility in sharing can directly impact security and usability. Secure sharing should make collaboration safer without forcing people to work around it.
Alongside passwords, modern teams often need to manage infrastructure secrets that support developer and operational workflows.
1Password includes secrets management in the core platform, supporting API tokens, SSH keys, and developer workflows.
LastPass offers secrets management as a separate product, creating sprawl that requires additional tools and costs.
1Password offersTravel Mode, which removes sensitive data from your devices when crossing borders and restores it when you reach your destination. Only vaults marked “safe to travel” remain on the device.
This gives organizations an additional way to reduce unnecessary data exposure for employees traveling internationally, especially in higher-risk environments. The Associated Press has publicly described using 1Password to help protect journalists traveling to high-risk countries.
LastPass does not have a comparable travel mode.
Provisioning directly impacts operational efficiency, especially as organizations scale.
1Password provides automated provisioning built directly into the platform. Unlike bridge-based models that require maintaining separate infrastructure, Automated Provisioning hosted by 1Password requires no servers to deploy, no SCIM bridge to maintain, and no ongoing infrastructure burden.
By running provisioning within 1Password’s secure infrastructure, this approach reduces operational overhead while maintaining the platform’s zero-knowledge security model.
For growing organizations, simpler provisioning means faster deployment, less maintenance, and fewer moving parts to monitor and revoke through the employee and software lifecycles.
As organizations scale, governance becomes more complex. Security teams need processes that meet teams where they’re at and support productivity without creating more manual work.
1Password supports enterprise multi-tenancy with:
Parent and child account structures
Centralized policy enforcement
Delegated administration
This allows organizations to map security controls to business structure while maintaining consistency across environments.
LastPass supports enterprise deployments but not parent-child accounts, so they typically require more manual configuration to achieve similar segmentation.
For enterprises managing multiple business units, regions, or subsidiaries, this can make a meaningful difference in how easily security policies scale.
Both 1Password and LastPass improve security compared to unmanaged passwords, but the differences become clear when you look at how each platform reduces credential risk and manages access across the organization.
1Password is designed with a dual-layer security model that goes beyond a single master password, and it provides real-time, actionable risk insights through Watchtower. It also includes built-in phishing protection, flexible and secure sharing, integrated secrets management, and broad SIEM support, all without adding operational complexity through additional infrastructure or fragmented tooling.
That helps IT and security teams secure and govern credentials across everyday work, including shared, sensitive, and business-critical access that often sits outside traditional systems.
For organizations that prioritize visibility, control, and long-term scalability, these differences can have a meaningful impact on both security posture and day-to-day operations.
If you want to strengthen credential security across your workforce, please reach out to us.
Contact usTry 1Password free for 14 days and see how it can help your team secure access without slowing work down.
Try it free
We built 1Password® Unified Access to extend identity security beyond humans to the agents and machine workloads operating across your business. In practice, that means securing not just who gets access, but how agentic systems connect to tools, services, and data.
That makes the MCP gateway a critical control point. It sits between AI agents and the systems they need to reach, making it the natural place to enforce policy, visibility, and governance. But in many deployments, it also becomes the place where credentials accumulate, moving secrets out of the vault and into the platform.
That is the problem 1Password and Runlayer are solving together. With this integration, enterprises can keep their machine credentials in 1Password, resolve them only at runtime, and audit every fetch and rotation without exposing the secret itself.
If your team has adopted an MCP platform to centralize how AI agents access tools, you've probably solved one problem and created another.
Before the MCP platform, credentials were scattered across developer machines in plaintext config files:

After the MCP platform, those credentials shifted from the developer’s laptop into the platform’s database. This centralizes them, but still keeps them outside your vault. Exposure on the local machine decreases, while secrets sprawl and operational complexity increase. A better model is to keep these credentials alongside the rest of your secrets, in a system that is consistent, easy to use, and supports self-service for the AI builder.
Enterprises are using hundreds to thousands of upstream server connections like GitHub, Slack, Notion, Linear, and internal APIs. Each server needs at least one credential. When those credentials live outside the vault:
A platform compromise exposes every MCP server token at once.
There's no single source of truth for who changed what credential and when.
Enforcement of credential access policies lives outside a targeted, centralized solution.
As enterprises move from experimenting with AI tools to deploying them in production, this gap grows from a hygiene issue to a governance problem.
As we worked with Runlayer on this integration, we held to the same security principles that guide all of our AI work:
Secrets stay secret. Credentials live exclusively in the customer's 1Password vault. Runlayer stores a reference, never the raw value.
Least privilege and minimum exposure. The secret exists in memory only for the duration of the request. Nothing persists on disk or in the gateway's database.
Full Auditability. Every secret fetch and every rotation is logged in 1Password and in Runlayer with hash-based traceability that never exposes the credential itself.
Runlayer is the Enterprise Control Plane for MCP, Skills, and Agents. It sits between AI agents, IDEs, and MCP servers, proxying every tool call through a control plane that applies real-time security models and policies, logs complete audit trails, and centrally manages credentials. Think of it as the one layer for AI tool access, every MCP request flows through it, which makes it the natural place to securely inject credentials at runtime for agent workloads.
The integration uses a simple pattern: instead of pasting a raw API key into Runlayer's server configuration, you enter a 1Password secret reference. Runlayer resolves it at connection time.
In Runlayer's Server Inputs UI, enter an op:// reference for any credential field:

The UI shows a blue 1Password badge next to the field and displays "Managed by 1Password" as the status. The reference format follows the standard 1Password convention: op://vault/item/field.
This reference is all Runlayer stores. No raw credentials touch Runlayer's database.
When the MCP proxy handles a tool call, it scans transport headers for op:// references. For each match, it calls the 1Password SDK to resolve the live value and injects it into the upstream connection:

The resolution supports prefixed values too. A header configured as Bearer op://MCP/GitHub/token resolves to Bearer ghp_actual_token_value - the prefix is preserved, only the op:// portion is replaced.
Once the upstream request completes, the secret is no longer in memory. No caching to disk, no storage in the database, no residual exposure.
If someone rotates the credential in 1Password, Runlayer picks up the new value on the next connection. It detects changes by comparing the SHA-256 hashes of resolved values against prior fetches.
When a rotation is detected, Runlayer emits a secret_provider.rotated audit event with both the old and new credential hashes, enough to confirm the rotation happened without revealing the credential itself. The audit log entry reads: "New credential fetched from 1Password - zero downtime."
No config changes. No redeployments. No coordination between teams. Rotate in 1Password, and every agent picks up the new credential on its next request.
Runlayer's approach to AI credential management for MCPs aligns with how enterprises want to handle secrets - in the vault, resolved at runtime, with a full audit trail. The op:// pattern means credentials never leave 1Password until the exact moment they're needed."
— Andy Berman, CEO of Runlayer
op:// references in any MCP server credential field.
Live secret resolution at proxy time via the 1Password SDK.
Automatic rotation detection with zero downtime, hash comparison on every fetch.
Audit events for every secret fetch (secret_provider.fetched) and rotation (secret_provider.rotated).
UI badges showing which credentials are managed by 1Password.
Hash-based traceability, credential values are never logged, only their hashes.
This integration is the foundation for a deeper collaboration between 1Password and Runlayer. In the coming months, we plan to expand support for:
Coordinated rotation. Runlayer triggers rotation on a schedule or policy violation; 1Password updates the vault item automatically.
Full agent identity lifecycle. Creating an agent in Runlayer auto-creates a corresponding 1Password vault item; deleting archives it.
BYOV for OAuth tokens. Extend the op:// pattern beyond API keys to OAuth client secrets and refresh tokens managed through Runlayer's delegation flow.
MCP server credentials belong in your vault. With this integration, getting them there takes one change: replace the raw value with an op:// reference.

Identity establishes trust. The next problem is how that trust is used.
In June 2025, Microsoft patched EchoLeak (CVE-2025-32711), a zero-click vulnerability in Microsoft 365 Copilot that allowed an attacker to exfiltrate sensitive enterprise data, including API keys, confidential documents, and internal conversation snippets, without human intervention.
The attack was deceptively simple. An attacker sent a normal-looking email with hidden instructions embedded in it. A human would not notice them, but the model could interpret them. The email remained dormant until Copilot later pulled it into context for another task. At that point, the instructions triggered, and the agent used the victim’s existing permissions to retrieve and disclose sensitive information.
The specific vulnerability matters, but the broader lesson matters more. A system can authenticate correctly, authorize correctly, and still produce the wrong outcome.
Microsoft patched EchoLeak before it was publicly disclosed. Since then, researchers have identified similar patterns across AI-assisted workflows, including additional Copilot-related vulnerabilities in 2026. These are not isolated issues. They point to a broader, repeatable pattern.
When AI systems process untrusted content and act with user-level permissions, prompt injection and unintended data access become systemic risks rather than edge cases. This is not a failure of authentication or authorization. It is what happens after both succeed. Systems are behaving exactly as designed, and still producing the wrong outcomes.
Prompt injection isn’t just a model issue. It’s a signal that something is breaking between how systems reason and how they act.
The industry sees the same thing. OWASP Top 10 for LLM applications ranks prompt injection as a primary attack vector, and research from OpenAI and others shows that models cannot reliably distinguish between legitimate instructions and malicious ones embedded in external content. Robustness to these attacks remains an open problem.
That means even when an agent has the right identity and permissions, it can still be pushed into using them incorrectly.
In our AI Agent Security Benchmark, we evaluated how agents behave across common workflows. What emerged consistently was not a failure of login or access control in the traditional sense. Even with valid credentials and permissions, agents routinely acted outside their intended scope, accessing sensitive data, misusing tools, or following manipulated instructions.
In one case, models correctly identified a phishing page but still opened it, retrieved credentials, and entered them. The issue wasn’t whether access was allowed; it was how that access was used. This is the access trust gap.
Traditional enterprise security is built around two basic questions: who is this, and what are they allowed to access?
Authentication answers the first, and authorization answers the second. That model works when a few assumptions hold: a human is initiating the action, their intent is relatively clear, and everything happens within a defined session.
Agent workflows don’t fit that model: a single agent can move across multiple systems, use different credentials, and chain together actions in ways that aren’t always visible to the person who started the task. Intent is no longer fixed at the beginning of the session. It can shift during execution based on how the agent interprets the task, what it encounters, and what untrusted inputs get pulled into the workflow.
Identity still matters in this new world. You need to know what the agent is and who it’s acting for. Systems like OAuth and SAML provide that foundation. But identity answers a question that was asked before anything actually happens. It doesn’t tell you whether what’s happening now still makes sense.
Authorization has the same limitation. It checks scopes, roles, and whether access is valid. But that only tells you the agent can access something. It doesn’t tell you whether this specific action is appropriate.
Pulling model weights into an unmonitored environment, pushing code to a production branch at 2 AM, exporting data to an unknown endpoint. All of these actions can be allowed and still be wrong.
Traditional IAM question | What it governs | What it misses |
|---|---|---|
Who is this actor? (Authentication) | Identity verification at login | Whether the agent’s intent has been manipulated post-authentication |
What are they allowed to access? (Authorization) | Scoped permissions via tokens | Whether a specific action is appropriate given the current context |
How is authority exercised at runtime? | Agent actions | Whether this action should occur now, based on context and intent |
This introduces a new control boundary. It’s not just about who has access or what they’re allowed to do. It’s about how that access is actually used when something happens.
The gap between what’s allowed and what should happen in the moment is exactly what prompt injection exploits. Identity alone doesn’t close that gap.
Instead of making a one-time access decision and assuming it holds, systems need to evaluate access as work unfolds.
This means credentials are short-lived, and access is issued for a specific operation rather than granted as a standing permission.
Pattern | Example | What it does |
|---|---|---|
Short-lived cloud credentials | AWS STS | Issues temporary credentials scoped to a session |
Token exchange | OAuth 2.0 Token Exchange | Trades one credential for another with narrower scope |
Workload identity federation | SPIFFE / SPIRE | Binds identity to workloads, not static secrets |
Ephemeral SSH certificates | OpenSSH CA | Issues certificates that expire after use |
These patterns reduce standing privilege and narrow scope. But they don’t answer the harder question: should this action happen right now?
OpenAI describes similar principles in its system documentation, including the use of key management systems, role-based access controls, and time-bound access to sensitive resources.
The real challenge is deciding what should happen in the moment. Each action needs to be evaluated in context. Access may be valid, but still inappropriate.
This is where systems break down. Prompt injection doesn’t bypass access controls; instead, it works within them. The agent has permission, and the system is behaving as designed, but the outcome is still wrong.
Suppose an agent is granted read/write access to update customer records. The task is to update addresses. Instead, the agent modifies unrelated records based on flawed reasoning about data consistency. The action is allowed, but wrong in context.
When an agent invokes a tool, authenticates to a service, retrieves a credential, or performs an operation, that moment needs to be evaluated:
Requirement | Purpose |
|---|---|
Evaluated against policy | Does this action comply with current rules? |
Constrained by context | Is this action appropriate given the task, time, and target? |
Continuously enforced | Is the authorization still valid, or has something changed mid-chain? |
Recorded for audit | Can we reconstruct exactly what happened and why? |
You can already see this in practice. In a recent security advisory on AI-assisted browsing, 1Password highlighted risks that emerge when AI systems operate with user-level permissions over untrusted content.
The takeaway is simple: you cannot rely on the model to correctly interpret rules. Controls need to be enforced deterministically. At the browser level, that means domain-bound autofill and explicit confirmation for sensitive actions.
The same principle will apply more broadly. Security has to extend beyond login into execution.
The identity layer still matters. You need to know what an agent is, who authorized it, and what it’s allowed to do. But those are starting points, not the full answer.
Traditional IAM answers who and what. Agent systems require a third answer:
How is authority actually exercised at runtime, and should this action proceed, based on context and intent?

That is the next layer of AI security.
1Password Unified Access is built around that layer. It gives teams visibility into where agents and credentials exist across endpoints, browsers, and development environments. It secures that access through centralized governance, vaulting, and just-in-time patterns that reduce standing privilege. And it brings audit context closer to the moment access is exercised.
These controls don’t try to predict intent. They establish a foundation for governing how authority is used in practice as agent systems evolve.
The credential is the last mile of every agent action. It’s where an abstract permission becomes a real operation in a system. It’s also where trust is either maintained or broken.
Identity providers govern the front door. 1Password is building the layer that governs what happens after.
See how 1Password® Unified Access helps secure the next layer of AI security by governing how access is used at runtime.
Learn more
Hi! I'm Ollie Cheal, VP of Go-To-Market (GTM) in EMEA at 1Password.
If you’re exploring your next role in GTM, I’d love to give you a look at what we’re building here and why it’s such an exciting time to join. Right now, our mission is clear, the stakes are high, and our people are all in to win. With that in mind, let me share why this moment matters and what it takes to thrive on this team.
The biggest technological shift of our lifetime is happening right now, and 1Password is perfectly positioned to bring out the best for our customers. 1Password is helping organisations around the world to unlock productivity benefits without losing trust, safety, or control, as AI reshapes how work gets done, how decisions are made, and who (or what) gets access to our customers' data.
It's an enormous opportunity, and we have both the foundation and the products – from Unified Access and Enterprise Password Manager to SaaS Manager and Device Trust – to shape how the EMEA market navigates this evolving landscape. We have a large and growing user base in EMEA, and as we continue scaling this growth, I’m excited to keep hiring ambitious talent to help us build quickly towards our mission: 1Password protects identities and provides trusted access for people and agents. We unlock productivity by making security and privacy simple for every human and organization.
When I joined 1Password in 2024, I shared our vision: we were laying the foundation for growth in EMEA – bringing in an early dedicated team to set the standard for how we operate and building culture and behaviours that would let us scale the right way. I talked about three traits that I look for in any member of my team: drive, coachability, and role ownership.
In 2026, things look a little different at 1Password. We’ve evolved from laying the groundwork in EMEA to accelerating with a much bigger runway ahead. We’re focused on investing in the region and collaborating across defined teams like Sales, Marketing, and RevOps to move with speed and momentum. I still hire for the same key traits, but here’s how they have evolved today:
Drive: In this chapter, drive is about having a bias to action and the discipline to scale. I’m looking for people who choose to level up every day, mastering new products and sales motions to help us raise the bar together. Our work isn’t easy, and you’ll need to be intrinsically motivated to achieve excellence.
Coachability: As AI reshapes the pace of work, coachability will be your superpower. The scope of what we’re taking to market is quickly expanding, so we need people who seek feedback early, apply it quickly even in the face of ambiguity, and keep improving their craft every day.
Role ownership: This is perhaps the area that has changed the most since my last blog. AI has changed how we do almost every role and having the curiosity to embrace it has started to shine through as a key part of each role. We’ve also just made the announcement that we’re moving into securing access at run time so we’re now also looking for people who’ve had success in DevSecOps, but that’s not a requirement. Whether it’s curiosity or a niche skill set, we’re looking for readiness to lean in and make an impact.
As our high-performance culture has matured, we’ve also made the practices that help us thrive explicit. We call them the 1Password Behaviors for Success: take full ownership, proactively contribute, practice a growth mindset, be adaptable and resilient, and collaborate effectively.
What energizes me about scaling is the opportunity to keep growing a highly diverse team, one that’s motivated by the challenges ahead and has the skills to thrive in our high-performing environment.
We move fast, and we make that pace sustainable through initiatives like quarterly wellness days, professional development opportunities for everyone, and employee resource groups that champion a sense of belonging across the organisation.
We’re also being intentional about leadership in-region. We’re building an EMEA leadership team that brings different perspectives, backgrounds, and ways of thinking, because the best ideas come from shared purpose and an inclusive environment where everyone feels supported to do their best work. Today, we're proud to have strong representation from many employee resource groups, including Pride, Women at 1Password, and AfroBits.
Together, we're creating something special: a culture built on learning, inclusion, collaboration, ownership, and a shared commitment to building a safer, simpler digital future for our customers.
If what you’ve read has you energised, and you’re looking for a place that will challenge you, support you, and sharpen your craft fast, EMEA at 1Password is a phenomenal place to build your SaaS career.
You'll work in a multi-market environment with room to make an impact. You’ll be guided by leaders shaping the future of identity security, and you'll be part of the incredible team recognised as 1Password's Team of the Year in 2026.
Join our nearly 200-person team in EMEA and come meet us in this moment: explore open roles at 1Password.

Automating user provisioning sounds simple, until you remember everything that provisioning really touches.
For most SaaS products, SCIM is “just” user and group lifecycle management. Your identity provider calls an API, accounts get created, access is assigned, and offboarding removes it. But for 1Password, provisioning intersects with something far more sensitive: the cryptographic foundation that protects every vault.
1Password is end-to-end encrypted by design. We do not hold your encryption keys and cannot see your vault contents. This provides a powerful guarantee that even if 1Password’s servers were to be compromised, your data would remain unreadable, as the keys required to decrypt it are never accessible to us.
That model is why customers trust 1Password with their most sensitive credentials, but it also makes automation genuinely hard. After all, how can we automate provisioning inside a zero-knowledge platform, without reintroducing trust in the server?
That is the problem Automated Provisioning hosted by 1Password solves, and it’s why our approach is fundamentally different from other provisioning options on the market.
Most hosted provisioning solutions are built on a straightforward assumption: the service performing provisioning can be trusted to manage the system state, distribute keys, and act as an authority. This is what makes many SCIM implementations “easy” to host, but that assumption quickly breaks down in a zero-knowledge system.
If a server can create, hand out, or swap cryptographic keys without independent verification, then clients are ultimately trusting the server as a source of truth. Even if the threat is theoretical, the trust model is real, and even a theoretical gap matters for 1Password.
Public key cryptography is built around two keys: a private key that must remain secret, and a public key that can be shared freely.
When you share something securely, you encrypt it using the recipient’s public key, and only their private key can decrypt it. The difficulty comes in ensuring that the public key received actually belongs to the person in question.
In most systems, the server is meant to be the source of truth. However, it’s entirely possible that a malicious or compromised server could be used to swap in a fake public key and trick a user into encrypting data to be shared with an attacker instead of the intended recipient. The attacker could decrypt it, read it, then re-encrypt it for the real recipient so everything still “works.”
However theoretical that threat may be, for a product built on the principle of not requiring trust in our servers, we wanted to eliminate the risk as completely as possible.
Public Key Verification (PKV) is our answer to this threat. PKV gives every 1Password client the ability to independently verify that a public key belongs to its intended owner, and it’s powered by a new foundation called the Account Trust Log.
The Account Trust Log works as a cryptographically verifiable history of an account’s keys and trust decisions:
Every time a key is created or updated, that event is recorded as a new entry
Each entry includes the operation performed, a hash representing the account’s “world state,” and a cryptographic signature from an authorized actor
Each entry links to the previous entry, forming a tamper-evident chain
If a link is altered, the chain breaks and tampering is detectable
Now, clients and admins can verify key authenticity against the Trust Log, meaning that 1Password clients don’t have to trust our servers alone to distribute public keys correctly, and our servers can’t be used even theoretically to compromise public keys.
This marks a meaningful shift in our security model, and an essential step toward enabling automation at scale.
Historically, the way to preserve zero-knowledge and still support automated provisioning was to push sensitive operations into customer-controlled infrastructure. In 1Password, provisioning is tied to the creation and management of key material that enables secure access to encrypted vaults. This is why we built the 1Password SCIM Bridge, a self-hosted service that lives inside client environments and keeps sensitive cryptographic operations isolated from 1Password.
Unfortunately, 1Password SCIM Bridge can also come with real operational costs, and customers told us they wanted the same privacy guarantees without having to own the operational burden.
End-to-end encryption creates a paradox when it comes to letting 1Password EPM do more for clients, since it can see so little of their data.
Automated Provisioning hosted by 1Password resolves that paradox by moving provisioning’s sensitive cryptographic operations into an isolated execution environment built on confidential computing and using secure enclaves, which exists inside of 1Password’s infrastructure, .
In plain terms, the team at 1Password have built a hosted provisioning system where cryptographic operations can be performed. The secrets involved cannot be accessed, inspected, or extracted by 1Password, cloud operators, or anyone with privileged access to the surrounding infrastructure. This is a new architecture designed specifically for automation in a zero-knowledge system.
At the heart of Automated Provisioning hosted by 1Password is the secure enclave, an isolated compute environment designed to keep sensitive data protected even while it’s being used.
The enclave can perform cryptographic operations needed for provisioning, like generating encrypted key material and preparing account setup flows, but it cannot reveal those secrets to any outside party. For this to work, we also need strong proof that the enclave is running the exact code we intend, and that the rules we designed cannot be silently bypassed.
That is where attestation comes in. Before clients or services trust the enclave, they verify a cryptographic attestation measurement proving the enclave is running the expected, audited code. This makes the environment verifiable, not just isolated.
Automated Provisioning hosted by 1Password is the first major 1Password feature that brings together:
Confidential computing for operator-inaccessible cryptographic execution
Public Key Verification for client-side verification of key authenticity
The Account Trust Log for a tamper-evident, signed chain of trust
This combination is the core differentiator, and it’s the reason 1Password can host provisioning successfully.
When your identity provider provisions a new user, this initiates a secure sequence:
Your IdP makes a standard SCIM request to 1Password
The request is processed inside the enclave
Key material is generated and sealed, protected inside the enclave’s boundary
The operation is recorded in the Account Trust Log as a signed entry
The user completes a secure confirmation flow, ensuring only the intended user can activate and claim access
From there, clients can at any time independently verify keys and changes via PKV and the Trust Log.
Essentially, behind every “add user” action, many cryptographic operations are performed in isolation, recorded in a verifiable history, and designed so that no operator can tamper with or inspect the sensitive parts of the process.
One of the most important design choices we made at 1Password is that Automated Provisioning does not rely on implicit server authority. To enable Automated Provisioning hosted by 1Password, an administrator must explicitly delegate trust to the enclave using a service account, so that it can perform specific actions and write the appropriate entries into the Trust Log. This ensures that automation always remains rooted in an admin’s deliberate, verifiable action, not in “the server can do whatever it wants.”
Automation at scale must assume real-world failure modes, including identity provider compromise and misconfiguration. Automated Provisioning hosted by 1Password includes guardrails designed to reduce blast radius and prevent abuse, including:
Trusted email domains, cryptographically signed during setup to block rogue invitations
Scoped provisioning controls, so certain critical groups cannot be automatically managed
Attested environments, so provisioning actions occur only inside verified enclave code
Operator-inaccessible execution, reducing the risk of tampering or inspection
If you already use SCIM elsewhere, it can be tempting to view provisioning as a solved problem. But in a zero-knowledge platform, hosted provisioning isn’t just a matter of integration; it requires essential decisions about security architecture
Automated Provisioning hosted by 1Password is different because it does not ask you to trade privacy for automation. It preserves end-to-end encryption and removes the operational burden of self-hosting by combining:
Confidential computing for isolation and operator-inaccessibility
Verifiable cryptography through PKV and the Account Trust Log
Admin-rooted delegation so automation is accountable and constrained
Most provisioning solutions focus on simplicity by relying on an assumption of centralized trust. We instead committed to building simplicity without centralizing trust.
The key takeaway is that Automated Provisioning hosted by 1Password removes the need to deploy and maintain a SCIM Bridge. Cryptographic operations run inside an isolated, attested secure enclave, and Public Key Verification and the Account Trust Log make key authenticity independently verifiable. Secure automation is made possible by ensuring that it’s kept constrained and accountable through explicit delegation, not server authority.
The result is provisioning that scales without compromising the zero-knowledge model.
Automated Provisioning hosted by 1Password, multi-tenancy, and the Users API for Partners, were all made to serve the same goal; they make it easier than ever to deploy 1Password EPM wall to wall. Simplified provisioning, scalable structure, and predictable governance all help enterprises secure every team and workflow, while preserving the usability and zero-knowledge security model that 1Password is known for.
These capabilities mark another step toward what modern enterprises need most: security that scales with the business instead of slowing it down.

Agentic AI is changing how work gets done inside organizations. It’s embedded in IDEs and automation tools, and it’s showing up in browsers, internal workflows, and everyday productivity apps. Developers are using AI agents to accelerate engineering work, while knowledge workers are vibe coding apps without training on developer security practices, all of which create untenable risks for organizations. That shift has real implications for identity and access control. For years, identity security centered on login: authenticating the user, establishing a session, applying policy, and assuming authority for the duration of that session. That model worked for human access, but it breaks down when credentials are used by local AI agents, automation scripts, CI/CD pipelines, and AI-native tooling. In this new reality, authority shouldn’t be decided once at login and then trusted all day. It should be confirmed right when access is requested, every time a credential or secret is used.
That’s why we’re introducing Unified Access Pro, available today. It helps teams discover, secure, and audit access across humans, agents, and machine identities, so organizations can adopt AI confidently and securely.
Discover risk where traditional identity security systems can’t
As work shifts to AI agents and automation, more credentials are used outside the identity systems that security teams rely on. It happens on employee devices, inside local development environments, and across browser-based AI tools. Security teams often have little insight into what’s happening on employee devices and in the tools they use every day, where credentials are created, stored, and first used.
That gap matters. Exposed SSH keys, plaintext .env files, long-lived API tokens, and locally installed agents rarely appear in traditional SaaS logs or federated identity systems. Yet these credentials can grant direct access to production systems and sensitive data. As more employees experiment with AI agents, the volume of unmanaged credentials in circulation increases, expanding an organization’s attack surface.
Unified Access extends visibility to where risk often starts: employee devices. It discovers AI tools and local agent activity across devices and browsers, identifies exposed credentials, connects findings back to real devices and users, and guides end-user remediation for better security practices. It discovers risk early so organizations can address it before credentials and secrets are exposed, misused, or exercised at scale.
Discovery is only useful if it leads to action. Once you can see where credentials live and how they’re being used, the next step is bringing them under consistent control.
Unified Access centralizes credentials and secrets in a single, secure vault with consistent policies across humans, agents, and machine identities. It builds on 1Password’s enterprise vaulting foundation, trusted by more than 180,000 businesses and protecting more than 1.3 billion credentials and secrets. That includes employee usernames and passwords, as well as API keys, SSH keys, and environment files that developers rely on to connect systems and automate work. Instead of being scattered across local machines, configuration files, shared documents, and scripts, credentials are governed in one place and managed with consistent policy controls, even allowing security teams to take ownership of a credential and enable its use without ever exposing the secret itself.
As the lines between human and non-human access blur, the same credential might be used by an employee today and by an agent or automation workflow tomorrow. Unified Access provides a single source of truth, so access policies aren’t fragmented by where or how work happens. It also changes how credentials are delivered. Rather than distributing long-lived secrets and hoping they’re handled correctly, Unified Access can provide credentials to AI agents and machine identities at the moment they’re needed, evaluating access in context when it’s requested. As more work is delegated to agents, moving from “always-on” access to “just-in-time” access becomes critical.
As credentials move across humans, agents, and machines, audit trails fragment. Human authentication lives in one system, service accounts typically live in another, and agent activity can span both, which makes it hard to answer basic governance questions.
Unified Access brings credential access under one system of record, so security teams have a single place to see which credential was used, by whom or what, and when. That unified trail matters for incident response and for continuous governance as more work is delegated to agents.
Unified Access is launching with collaborations across the AI and developer ecosystem, so teams can secure agent-driven workflows in the tools they already use.
Foundation model providers: Anthropic and OpenAI are partnering with 1Password to enable the use of 1Password vault items in agentic browser-based flows and developer IDEs.
AI developer tools: Cursor, GitHub, and Vercel integrate with 1Password to secure developer workflows across IDEs, cloud sandboxes, and CI/CD pipelines, with hooks available for Cursor agents and GitHub Actions.
AI and cloud infrastructure: CoreWeave uses 1Password to discover, secure, and audit agentic workloads at the infrastructure level, and Commvault is partnering with 1Password to help organizations protect and manage access to critical data.
MCP gateways: Natoma and Runlayer integrate 1Password to securely inject credentials into the agent sessions they manage, simplifying workflows and reducing secrets sprawl.
AI browsers: Anchor Browser, Browserbase, KERNEL, and Perplexity integrate with 1Password so agent workflows can access secrets just-in-time, with least-privilege controls, and a clear audit trail of actions taken on a user’s behalf.
These collaborations reinforce the same point: as agents and automation become embedded in everyday workflows, credential security has to be built directly into the platforms where work happens.
AI is changing who can build, and how work gets done. That means credentials are moving faster, getting used in more places, and being exercised by more than just humans. Unified Access is built for that shift, with visibility at the edge, centralized control, runtime delivery, and unified audit across humans, agents, and machines.
Want to learn more about 1Password® Unified Access? Head here to get started.

Modern enterprises aren’t just adding employees; they’re adding subsidiaries, multiple teams, contractors, AI builders, temporary projects, and new SaaS tools every week.
And every new addition to a company’s ecosystem also brings more credentials to manage. Unfortunately, not all of those credentials can be managed by solutions like single-sign-on (SSO) or privileged access management (PAM). Many of them might be stored in shared spreadsheets, developer environments, browser sessions, and automation workflows that traditional identity security systems were never designed to govern.
This results in identity sprawl, operational drag, and an overall widening of the Access-Trust Gap. In the face of this ever-expanding attack surface, security leaders are left struggling to deploy credential security across every team and workflow, without having to build more infrastructure just to manage their infrastructure.
In light of these issues, today we’re introducing a new evolution for 1Password Enterprise Password Manager (EPM): enterprise-grade provisioning, structure, governance, and security automation built directly into the platform.
This launch includes:
Automated Provisioning hosted by 1Password
Enterprise multi-tenancy
Verified emails from 1Password
OAuth-based Users API and new Security Automation integrations
Together, these capabilities make EPM easier to deploy, easier to scale, and easier to operate as the foundational tool of modern identity security.
Automated Provisioning hosted by 1Password is our next-generation provisioning solution, built directly into 1Password. Automated Provisioning requires no servers to deploy, no bridge to maintain, and no ongoing infrastructure burden.
In early testing, the response from admins was immediate.
“We were done in about five minutes. We set everything up from scratch, added the integration in Okta, and it worked immediately. Adding and removing users was seamless. This is 100% a better experience than trying to set up the SCIM bridge on GCP. This is exactly what a best-in-class provisioning experience should look like: URL, token, test API, and SCIM is up and running. Thanks for making it so easy.”
By hosting provisioning inside 1Password’s secure infrastructure, powered by confidential computing, we removed the operational tax that slows teams down without compromising our zero-knowledge security model.
Most provisioning solutions start from the same assumption: the service managing users can also see the data it manages.
That assumption does not work for 1Password.
From day one, 1Password has operated on a zero-knowledge, end-to-end encrypted architecture. At no point can anyone at 1Password see customers' encryption keys or vaults. Even our own infrastructure cannot read your data. That privacy model is core to 1Password, but it also makes automation significantly more difficult. For years, that tradeoff forced a choice: teams could have automation, or they could have zero-knowledge security, but they couldn’t have both without adding significant complexity by running their own infrastructure.
Automated Provisioning hosted by 1Password completely changes that dichotomy. Instead of asking customers to trust a hosted service with sensitive cryptographic operations, we designed provisioning to run inside of an isolated secure enclave. Encryption keys are generated, used, and protected inside that enclave, meaning that they stay isolated not just from 1Password operators, but even from the underlying cloud provider.
In practical terms, that means:
1Password can automate user creation and access without ever seeing client secrets
Cryptographic operations are isolated, attested, and inaccessible to operators
Every provisioning action is recorded in a verifiable trust log that clients can independently validate
Rather than a hosted version of a SCIM integration, this is a fundamentally different approach to automation in a zero-knowledge system.
The result is something rare in identity infrastructure: automation that scales without compromising privacy or trust.
Still, automating users is table stakes. The next challenge is organizing access at enterprise scale, and it’s significantly harder.
As companies grow, many start with a single 1Password account, which they quickly outgrow. Over time, different teams need different policies. For instance, acquisitions often require some degree of autonomy without losing oversight.
1Password Enterprise now supports multi-tenancy, enabling parent and child account structures with:
Delegated administration
Consistent policy enforcement
Centralized visibility
And more
This new way of structuring 1Password gets away from the “one size fits all” model and allows you to create a more personalized structure that maps to the way your team actually operates.
Automated Provisioning support for multi-tenant environments is coming soon after launch for teams that want to add automated assignment workflows.
Trust at the moment of access matters, especially when identity decisions are happening in real time. That’s why emails sent from 1Password now display our verified logo and authentication indicator across supported inboxes, including Gmail, Apple Mail, Yahoo (and more).
By meeting the strict verification requirements reserved for highly trusted senders, every message from 1password.com, 1password.ca, 1password.eu, and agilebits.com now carries built-in proof of authenticity that attackers can’t replicate. For customers onboarding users, verifying accounts, or recovering access, this removes hesitation at a critical moment. It reduces false phishing reports, accelerates self-service flows, and reinforces 1Password as a trusted foundation for identity security — not just in the browser or vault, but everywhere our customers interact with us.
Modern security teams increasingly rely on integrated security operations center (SOC) workflows that correlate signals and alerts, while orchestrating detection responses in real time. Behind every alert is an identity: a person, a service account, an API key, or an AI agent. However, when remediation requires manual steps, investigation and response slows, increasing security risks.
We’re launching the Users API for Partners (in public preview), our first API using OAuth 2.0-based authentication designed for secure, enterprise-grade security. Enabling ecosystem partners to build integrations for 1Password Enterprise Password Manager and use delegated, scoped authorization to list users, suspend access when risk is detected, and restore access after remediation.
We’ve worked with strategic partners over the past few months to build new security automation integrations using the Users API. With these integrations, customers can use EPM events activity logs and SIEM insights, alongside security automations, to trigger automated SOC workflows to suspend or restore users in 1Password Enterprise Password Manager when risk is detected.
Joint customers of 1Password Enterprise Password Manager and CrowdStrike, as well as BlinkOps, Elastic, Sumo Logic, Tines, and Torq can configure their OAuth integration within the Integrations page of EPM starting today.
For customers, this helps SOC teams reduce exposure time and act on risk with greater speed and consistency. For partners, this enables joint value solutions built on OAuth designed for secure, enterprise-grade extensibility.
Automated Provisioning hosted by 1Password, multi-tenancy, and the Users API for Partners, were all made to serve the same goal; they make it easier than ever to deploy 1Password EPM wall to wall. Simplified provisioning, scalable structure, and predictable governance all help enterprises secure every team and workflow, while preserving the usability and zero-knowledge security model that 1Password is known for.
These capabilities mark another step toward what modern enterprises need most: security that scales with the business instead of slowing it down.

Modern security teams increasingly rely on integrated security operations center (SOC) workflows that correlate signals and alerts, while orchestrating detection responses in real time. Behind every alert is an identity: a person, a service account, an API key, or an AI agent. However, when remediation requires manual steps, investigation and response slows, increasing security risks. Organizations are also expected to maintain continuous compliance via clearly enforced access controls and auditable processes.
Today, 1Password is expanding the 1Password Enterprise Password Manager (EPM) through the public preview of the Users API for Partners, enabling security teams to respond to incidents faster during active security events. Powered by the Users API for Partners, security automation integrations with partners like CrowdStrike, in addition to BlinkOps, Elastic, Sumo Logic, Tines, and Torq enable mutual customers to automatically suspend or restore users in EPM when risk is detected. Together, these capabilities embed identity actions programmatically into coordinated SOC workflows.
The Users API for Partners, now in public preview, enables execution of user-related actions within 1Password EPM. The API uses OAuth 2.0-based authentication designed for secure, enterprise-grade security. This enables ecosystem partners to build integrations for 1Password Enterprise Password Manager and use delegated, scoped authorization to list users, suspend access when risk is detected, and restore access after remediation.
We’re also introducing api.1Password.com as the single access point to 1Password APIs, accessible to ecosystem partners looking to build solutions with 1Password. The Users API for Partners will be the first API available through this access point with additional APIs accessible via api.1Password.com in the future.
Partners can explore and build integrations for 1Password EPM through our public preview documentation and provide feedback for improvements before general availability. For partners, this enables joint value solutions built on OAuth designed for secure, enterprise-grade extensibility.
1Password Enterprise Password Manager customers can now enable security automation integrations with CrowdStrike, as well as BlinkOps, Elastic, Sumo Logic, Tines, and Torq.
With these integrations, customers can use EPM events activity logs and SIEM insights, alongside security automations, to trigger automated SOC workflows to suspend or restore users in 1Password Enterprise Password Manager when risk is detected.
Security teams can now:
Reduce manual intervention by embedding identity actions programmatically into coordinated security processes, accelerating containment during high-risk incidents.
Orchestrate enforcement of access policies in alignment with organizational and regulatory requirements.
Maintain clear audit trails of what actions were taken, when they occurred, and which events triggered them.
Joint customers can configure their OAuth application within the Integrations page in EPM starting today to connect one of our new security automation integrations. For customers, this helps SOC teams reduce exposure time and act on risk with greater speed and consistency.
In addition to enabling SOC teams to execute programmatic users actions within automated workflows, we are strengthening the operational foundation of 1Password Enterprise Password Manager.
Automated Provisioning, hosted by 1Password, is our next-generation provisioning solution built directly into 1Password. It requires no servers, no SCIM bridge, and no customer-managed infrastructure. Running within 1Password’s secure infrastructure using confidential computing, it preserves our zero-knowledge architecture while enabling automated user creation and access management.
Enterprise multi-tenancy will support parent and child account structures with delegated administration, centralized visibility, and consistent policy enforcement. Organizations can scale across business units or acquisitions while maintaining oversight.
Together, these enhancements enable workflows at enterprise-scale, through automation and governance, for 1Password Enterprise Password Manager customers.

Women’s History Month is a time to recognize the women who are not only advancing their fields, but reshaping what leadership means within them. The theme guiding this year’s Women at 1Password Employee Resource Group (ERG), Leading the Change: Women Shaping a Sustainable Future, reflects that responsibility.
One of the women leading that change within 1Password is Nicole Scherbina, Senior Staff Manager of Product Operations and a leader within our Women at 1Password ERG. Her perspective on leadership, equity, and readiness reflects the kind of intentional impact we’re committed to building.
Take a few minutes to get to know Nicole and the journey that shaped her leadership.
Can you share a bit about your career journey and what led you into product operations? Was this a path you always envisioned?
My career has been shaped by a few themes, but the most relevant one is making order out of chaos. I’ve worked across very different environments and in organizations of different sizes and stages (food, non-profit, toys, you name it). Each one forced me to learn different operating models and challenged what I thought I knew. Having that diversity of experience is a superpower, and gives you more to pull from when you’re pattern matching in your next role. I’m a zero to one systems builder at heart. I’m most energized when something is undefined or messy and needs structure. I care about helping teams make better decisions and creating order where there isn’t any.
Product Operations became the right fit because it sits at the intersection of strategy and execution. It’s where you design the systems that translate ideas into outcomes. It wasn’t a path I mapped early on. It’s the result of a broad career that gave me perspective, pattern recognition, and a deep appreciation for how different teams operate. All of that now shows up in how I build and lead.
As Senior Staff Manager of Product Operations, what are the most critical problems you’re focused on solving at 1Password today? How does your work enable product teams to do their best work?
My focus is on the systems behind the execution; thinking about how what we’re doing ties back to our goals, and what’s in the way.
The most critical problems my team is solving are prioritization, cross-functional alignment, and operating effectively in ambiguity. Practically, this is about how effectively teams are operating when it comes to planning, communication, and visibility.
My team’s role is to reduce the chaos, surface trade-offs early, and provide visibility into risk and progress. We want product teams to spend more time solving meaningful customer problems and less time navigating processes or misalignment.
What’s a moment in your time at 1Password that you’re especially proud of?
The moment I’m most proud of is actually right now, and what I’m most proud of is my team and how they’ve risen to the occasion.
1Password is at an inflection point as AI reshapes how we build, prioritize, and deliver. Periods like this test an organization’s operating discipline and leadership depth.
My team has adapted exceptionally well to this environment. They’ve leaned into experimentation, tested tools in real workflows, shared what’s working and what isn’t, and brought others along. Seeing them operate with that level of ownership and judgment at a pivotal time makes this particular moment stand out.
You’ve built and developed a strong team that reflects different backgrounds and identities. What’s your philosophy when it comes to attracting, developing, and retaining diverse talent in product?
Diversity is not an initiative – it’s a leadership responsibility. Building a strong team starts with expanding your definition of what “great” looks like and challenging assumptions about experience and style. Excellence is not defined by a specific degree, school, or background. Not everyone has had the same access to opportunity, and strong leaders account for that rather than defaulting to what they know.
I focus on hiring for capability, curiosity, and mindset. From there, it’s important to set expectations, give and ask for direct feedback, and create a space where it’s safe to dissent. People stay where they see a path to growth, where their perspective is valued, and where they’re not penalized for sharing what they think.
Retention comes down to the environment a leader creates. Teams do their best work when they feel respected, challenged, and supported. That requires engaged, intentional leadership. Leaders have to work at and want to do it, because your people know when you don’t mean it.
You’re part of the leadership team for Women at 1Password ERG. What does advocacy look like to you in that role, and why is this work important to you?
Advocacy, to me, is both systemic and personal.
Systemically, it means ensuring women’s experiences and feedback are visible in leadership conversations and translated into action where needed. It also means showing women at all levels what success looks like, what other women are capable of, and building the road for others to follow behind.
Personally, it means mentorship, sponsorship, and creating spaces where difficult conversations can happen safely. It means equitable access to information and opportunities. It means addressing bias directly and making room for others to lead.
This work is important because culture is shaped by what leaders prioritize. If we want equity in outcomes, we have to be deliberate about equity in opportunity.
What’s something you wish more women internalized about their own potential or readiness for leadership?
If you wait until you feel ready, you’re likely already late. You may not ever feel ready, but you have to do it anyway, because growth happens at the edges of what you're comfortable with.
Many high-performing women wait until they meet every requirement before stepping forward, whether it’s for a new role or a promotion. Leadership often requires stepping into ambiguity before you feel prepared and guiding others through it.
I encourage women to trust their instincts, judgment, and resilience. Confidence comes through experience, and experience often starts with raising your hand before you feel entirely ready.
What does Women’s History Month mean to you personally, and what responsibility do we have as leaders today to shape the next generation of women in tech?
Women’s History Month is about both recognition and accountability. It’s a moment to acknowledge the progress that’s been made and to be honest about the gaps we still have.
As leaders, our responsibility is to make advancement less dependent on luck or proximity to power. That means building transparent systems, advocating in rooms where decisions are made, and modelling inclusive leadership consistently, not just when it’s convenient.
The next generation is watching how we lead. The standard we set today becomes what they inherit.
Women’s History Month is about honoring progress and taking responsibility for what we build next. If you want to join leaders like Nicole and help build the road ahead for the next generation, check out opportunities on our team.

Enterprise password managers (EPM) like 1Password, LastPass, Dashlane, and Bitwarden help you create, store, and fill strong passwords and credentials across different websites and apps, so you don’t have to remember or write them down. EPMs provide secure sharing, data encryption, and data breach prevention against phishing and malware, helping IT and Security teams protect and enforce policies around credentials.
While there are many EPM options, choosing the best password manager can be a challenge. Making a side-by-side comparison can be extremely helpful for understanding which is best suited to your organization’s cybersecurity strategy.
Today, the challenge is more than storing passwords securely. It’s also about reducing credential risk across everyday work, including shared logins, third-party access, and developer secrets that often sit outside of traditional SSO coverage.
In this post, we’ll compare two popular password managers: Bitwarden vs 1Password. We’ll compare both head-to-head on core, premium features that help organizations adopt credential security.
1Password | Bitwarden | |
Two-Factor Authentication | Included | Included |
Customer Support | Included | Not offered |
Travel Mode | Included | Not offered |
Guest accounts for EPM | Included | Not offered |
Two-Secret Key Derivation (2SKD) Security Model | Included | Not offered |
SIEM Integrations | Broad (CrowdStrike, Datadog, Splunk, Sentinel, Panther, Huntress, Sumo Logic, more) | Limited (Splunk, Panther, Elastic, Sentinel, Rapid7, Sumo Logic) |
Hosted Provisioning | Included | Not offered |
Developer Tools (CLI, VS Code Extension, SSH agent, Shell Plugins) | Included | Added cost, with some of these features |
Secrets Management | Included | Added cost |
Enterprise password managers store and encrypt users’ credentials to discourage insecure practices, such as password reuse, risky sharing, and vulnerable storage (e.g. spreadsheets, sticky notes, and browser-saved passwords). They include a password generator to help employees use unique, strong passwords where needed and enable secure sharing across teams and third parties.
But for IT and security teams, the challenge usually goes beyond password creation and storage. Shared and sensitive credentials are often created outside of SSO, where, over time, they create large governance problems, increase unmanaged access, and linger after employees leave or change roles.
While both 1Password and Bitwarden support the basics of credential security, there are differences between the two. For example, 1Password includes 20 guest accounts with every business plan, which allows securely sharing vault items with third parties like contractors, auditors, or temporary collaborators. This makes sharing login credentials and documents easier and much safer than sending them through email, text, or other methods.
Third-party access is rarely a one-time event. For contractors, external agencies, and temporary collaborators, access is often ongoing and needs to remain visible and controlled without slowing down work.
Bitwarden supports temporary password-sharing through its Bitwarden Send feature, but its not well suited for ongoing collaboration with third parties. Sharing all credentials saved in a specific Bitwarden vault is made difficult without free guest accounts. These are factors to consider when comparing password vaults.
Both 1Password and Bitwarden offer reporting capabilities that help you improve your password security. 1Password’sWatchtower is a security feature that includes alerts forweak, reused, or compromised passwords, domain breach monitoring, and exportable security reports or activity logs built for action.
Admins can also use 1Password to send account activity to their security information and event management (SIEM) system using the 1Password Events API. As a result, admins can get health reports on 1Password activity, such as sign-in attempts, item usage, and audit events, while managing all company applications from one central location.
Bitwarden makes reporting and exports painful and tedious, especially if you are self-hosting. Vault health checks and reports need to be manually run, unlike 1Password, which automates them and makes it easy to export for security audits. Compared to 1Password, Bitwarden also has limited SIEM integrations.
That can make a real difference for security teams that want credential-related activity to be part of broader monitoring, investigation, and compliance workflows rather than something they have to manage separately.
Bitwarden does not offer live chat or phone support, often redirecting users to fill out a request form to be connected by email. (Bitwarden paying customers are prioritized.) Meanwhile, all 1Password pricing plans offer 24/7 customer support – including phone support for Business plan subscribers. 1Password also includes a dedicated Customer Success Manager for organizations with 101+ users on a Business plan.
Both 1Password and Bitwarden are available across a wide range of devices and platforms. They offer native desktop apps for Windows, macOS, and Linux, as well as mobile apps for iOS and Android. While their mobile and desktop support is similar, the two providers differ in their browser extension experience.
Each provider offers browser extensions for popular browsers, including Chrome, Firefox, Safari, Brave, and Edge. Both the 1Password and Bitwarden extensions let you autofill [including autofilling time-based one-time passwords (TOTPs)] and generate secure passwords directly from your browser.
Yet, 1Password’s browser extension is more robust and intuitive. Watchtower alerts you to password breaches and other issues on the websites you have saved in 1Password, right within the extension. 1Password also has a built-in phishing prevention feature that acts as a second pair of eyes, stopping users from sharing their passwords with scammers. That way, you can take action immediately. (Bitwarden also offers an equivalent Phishing Blocker.)
Both 1Password and Bitwarden use a zero-knowledge architecture. Thanks to an end-to-end AES-256 authenticated encryption model, vault data is encrypted on the user’s device and cannot be decrypted by either provider, ensuring maximum privacy.
The difference lies in 1Password’s Two-Secret Key Derivation (2SKD) model, which combines your account password with a randomly device-generated 128-bit Secret Key to unlock and decrypt your data. This additional security layer creates an encryption key that protects your vault, even if your account password is compromised, because the Secret Key is also required to decrypt the data.
Bitwarden relies solely on a master password. If that password is phished or stolen, the entire vault can be compromised because there is no additional protection layer, such as a Secret Key.
Bitwarden offers a separate, paid secrets manager, leading to fragmented workflows at an additional cost to the base plan price. On the other hand, 1Password includes secrets management in our core enterprise password manager product, supporting API tokens, SSH keys, infrastructure secrets, and passkeys.
With 1Password, secrets management is not disparate; rather, it’s treated as a key workflow that brings consistency and control to CI/CD, cloud, and infrastructure pipelines. The integrated SSH agent securely stores and syncs keys, eliminates the need for unencrypted local keys, and enables biometric authentication for Git and SSH, reducing friction for developers while improving security.
1Password Business supports automated provisioning integrated directly into the platform. Unlike bridge-based models that require maintaining separate infrastructure, Automated Provisioning hosted by 1Password requires no servers to deploy, no SCIM bridge to maintain, and no ongoing infrastructure burden. By hosting provisioning inside 1Password’s secure infrastructure, powered by confidential computing, 1Password removed the operational tax that slows teams down without compromising its zero-knowledge security model.
In comparison, Bitwarden’s self-hosted provisioning is complex, requiring customer-side connectors, patching, and maintenance overhead. Self-hosting can seem attractive for compliance, but it shifts the security and operational burden to your team. As a result, if infrastructure is not maintained to the same standards, it could increase exposure to risks.
Therefore, 1Password reduces operational overhead for IT teams managing user lifecycle workflows, helping organizations scale access management.
A time-based one-time password (TOTP) is a form of two-factor authentication that adds an extra layer of security to your logins. The Bitwarden Authenticator is a tool that lets you generate and enter time-based one-time passwords (TOTPs) for online accounts that support them. It also enables multi-factor authentication (MFA) across online accounts and applications.
1Passwords also supports TOTP and guides employees toward stronger authentication by identifying weak sign-in methods and promoting passkeys or MFA through Watchtower. Additionally, 1Password Device Trust secures your perimeter by proactively blocking authentication attempts from untrusted devices.
While Bitwarden may appeal to technical teams for its open-source architecture, 1Password Enterprise Password Manager is a strong choice for both growing and established organizations that want a secure, integrated, and scalable platform as part of their cybersecurity stack.
With its strong functionality and ease of use for both admins and employees, 1Password reduces operational overhead and helps you deploy faster, gain better insights, and cover everything needed to uncover or address vulnerabilities.
1Password is the single, secure place to protect every credential – passwords, passkeys, shared logins, API keys, and AI secrets – giving businesses visibility and control across the entire identity surface.
If you want to strengthen credential security across your workforce, please reach out to us.
Contact usTry 1Password free for 14 days and see how it can help your team secure access without slowing work down.
Try it free
SaaS contract renewals have a way of sneaking up on IT and Finance teams. One day, everything is running fine. The next, a renewal notice hits your inbox, usually with little context, limited time, and no clear answer to the most important questions: Who’s using this? Do we still need it? And are we paying for more than we should?
For many organizations, renewals are reactive events instead of strategic decisions. That’s how SaaS spend compounds.
The problem isn’t negotiation skills or vendor management. It’s that most teams don’t have the visibility they need into spend and usage when it matters most.
Renewals should be straightforward. In reality, they’re anything but. It’s almost never easy or straightforward to get answers to the questions asked as part of the renewal process. There are a few simple reasons why:
Usage data is fragmented or missing entirely. Finance has contracts and knows the total spend. IT knows some of the apps in use. But rarely does anyone have a complete picture of who is using an app, how often, and whether those licenses are actually needed.
Ownership is unclear. Apps are often purchased by individual teams with credit cards, inherited through M&A, or renewed through POs quietly year after year. When renewal time comes, it’s not always obvious who owns the renewal decision or who should be accountable for the cost.
Offboarding gaps inflate renewals. Licenses tied to former employees don’t disappear on their own. If licenses haven't been well managed or fully removed, former employees may still be counted and billed at renewal time.
Auto-renewals remove leverage. If you miss the notice window, it’s possible a contract could roll over at the same (or higher) rate. Without time to evaluate usage or alternatives, organizations may be stuck overpaying for unused or unnecessary SaaS apps.
Effective SaaS renewal management isn’t about squeezing vendors. It’s about making informed decisions based on whether tools are actually being used actively within your organization.
Renewals should be predictable, not surprising. IT and finance need to know what’s renewing, when it’s renewing, and have easy access to the required contracts, all well before deadlines hit.
Decisions must be based on real usage. Understanding real usage is critically important, especially if only half your licenses are actively used or if you have two tools that do the same job. This information needs to be available at renewal time, not after the invoice is paid.
Renewals are a team sport. Renewals sit at the intersection of IT, finance, and the business. The process should support collaboration and not rely on last-minute Slack messages and spreadsheets.
1Password SaaS Manager helps IT and Finance teams turn renewals from fire drills into planned, data-driven decisions.
See what’s actually being used by continuously discovering SaaS apps and tracking license usage. 1Password SaaS Manager provides a clear picture of how SaaS tools are actively used in your organization.
Connect usage to spend and shift renewal conversations from “Do we need this?” to “How many licenses do we actually need?” That clarity helps right-size contracts and avoid paying for shelfware.
Prevent renewal surprises by getting alerts on upcoming renewals 30-60-90 days out, giving IT and finance time to review usage, involve stakeholders in automated Slack, Teams, and email messages, and make informed decisions.
Reduce risk by proactively using renewals as a natural checkpoint to review access, reclaim licenses, and address unmanaged or risky apps.
SaaS renewals aren’t just financial events. They’re checkpoints for visibility, governance, and operational maturity for SaaS Management processes. When IT and Finance have the right data, renewals become opportunities: to reduce waste, lower risk, and simplify the SaaS landscape. When they don’t, renewals quietly lock in inefficiency for another year.
With the right visibility and automation, contract renewals stop being guesswork and start working for the business.
You can learn more about how IT and finance can collaborate to take control of SaaS spend in our upcoming webinar, or start getting control over your license usage today with a demo of SaaS Manager.

In Formula 1® preparation isn’t optional, it’s everything
Whether you’re traveling to the circuit or hosting a watch party, race weekend means bouncing between devices, signing in fast, and clicking links quickly. The last thing you want is to reset a login at lights out. So if you share the streaming link, watch telemetry tabs on a second device, or keep the group chat on track, this blog is for you.
This 10-minute security checklist is built to help you secure streaming accounts, travel info, and all your race-day devices so you can rev up to speed before the race starts.
Pre-race password inspection: 2 minutes
Pole position accounts: 3 minutes
Share smarter: 2 minutes
Multi-screen test: 2 minutes
Victory lap: 1 minute
Scammers count on the urgency you feel to rush to the starting line, because they know that mistakes happen when you’re in a hurry. In a 1Password survey of 2,000 American adults, 89% said they have encountered phishing, and 61% said they have been phished. The biggest factor among the people who got scammed? Emotional urgency, like the kind you feel when you’re desperately trying to watch the race on time. The best way to avoid it? Make sure your logins are running smoothly before race day.
One compromised login can turn into multiple account headaches right when you are trying to buy tickets, pull up a boarding pass, or get the stream running.
A reused password is digital drag. It slows everything down and increases the risk if a single account is exposed. For a quick win, use a password generator to create strong, unique passwords in seconds, then save it so you don’t have to remember it.
Most of us have more accounts than we can remember of and the riskiest accounts are the ones we lose track of.
Check your 1Password Watchtower alerts for weak passwords, reused logins, and passwords that were involved in known breaches to help get you to a safe starting line.
On top of that, our built-in phishing feature adds an extra layer of potential by warning you before you enter your password into a suspicious page.
These accounts reset the rest. If one breaks, you are stuck in reset loops when you should be checking in or pressing play. Start with the accounts that can reset everything else.
Prioritize:
Email, it resets everything
Travel apps, airline, and hotel logins
Banking and payment info
Ticketing profiles
Streaming accounts
To secure your essentials, update the key accounts with strong, unique passwords.
When accounts are shared, a lockout can become a domino effect of messages, prompts, and new sign-in attempts. Who changed the password, and why are five devices trying to sign in at once?
If you have ever shared a streaming login, you know what happens:
Someone signs in on a new device → Someone else gets prompted → Someone changes the password → Now no one can watch
The fix is to share smarter. Keep credentials out of texts, screenshots, and notes where they spread quickly and are hard to control.
1Password survey research found that 76% of Americans who have fallen victim to a shopping scam still reuse passwords across multiple accounts. That means one bad click can turn into a chain reaction across shared streaming or worse, domino into email, travel, and payment info.
Secure your secrets by moving shared logins out of chats and into one secure vault. Then share access with one simple link that even people without 1Password can use securely.
You need your credentials securely available everywhere you sign in. This prevents cross-device friction that leads to missed flight check-ins, hotel confirmations, and race starts.
1Password is OS-agnostic across Mac, Windows, iOS, and Android devices, so your essentials stay in sync whether you’re on your phone, laptop, or both.
Most suspected phishing shows up in the places you use most at the moment: texts (59%), emails (59%), and phone calls (49%). Those are the places where your ticket needs verification, and your account needs attention.
On race weekend, you might open the message on your phone and sign in on your laptop. Context switching is where people make mistakes, and research shows that divided attention makes phishing harder to spot.
Check your kit:
Phone signed in
Laptop signed in
Email works
Tickets open
Stream plays
Travel app ready
If it is not smooth now, it won’t be smooth when the race starts.
Password resets, shared account lockouts, and second-screen issues never happen at a good time. 1Password helps you keep strong, unique passwords ready across devices, share access without spreading passwords in chats, and avoid phishing mistakes with warnings on mismatched URLs.
1Password is the Official Cybersecurity Partner of Oracle Red Bull Racing. The payoff is simple: smoother sign-ins across devices and fewer moments where security gets in the way of the weekend.

AI and automation are embedded in daily work. Copilots draft content and pull in customer context. Agents triage tickets, update records, and trigger workflows across Slack, Salesforce, Jira, and GitHub. In engineering, this acceleration shows up in scripts, CI/CD pipelines, and infrastructure automation that depend on secrets to ship and operate software.
Many organizations rely on a mix of sign-in and privileged access controls to standardize logins and secure connected apps. But these systems stop at what can be federated and do not govern the long tail of SaaS apps, shared accounts, or credentials created in automation and AI workflows. Business-led IT makes this unavoidable. Teams adopt tools quickly, often outside centralized reviews or identity provider integration.
Agentic AI compounds the gap. Developers and AI builders generate API keys, tokens, service accounts, and agent secrets. Browser-based agents still use usernames and passwords. Credentials spread into browsers, spreadsheets, scripts, pipelines, and prompts, beyond the reach of traditional identity systems.
That is credential sprawl. It is a business risk that IT and security own, even when the credentials originate outside their systems.
It’s a mistake to assume that securing sign-ins also secures credentials. IAM, SSO, and PAM govern sign-in and privileged pathways. But modern work also runs on shared logins and nonhuman credentials, such as tokens, service accounts, and secrets created and stored outside the identity provider, in the workflows where work happens.
These gaps often become visible only during an audit or incident. At that point, three questions determine whether access is governed or guessed.
What credentials exist
Who owns them
What can they access
If you cannot answer these questions consistently, your identity program is managing sign-ins, not access.
Teams pick and use tools quickly, often skipping central reviews. 1Password research found that 52% of employees have downloaded apps without IT approval.
This creates a shadow credential layer: access is created wherever work happens, such as in browsers, notes, SaaS admin consoles, text files, scripts, and AI prompts. When credentials are created faster than they can be governed, they are reused, shared, and left behind. This results in lingering access that is difficult to inventory, defend, or revoke confidently.
Attackers don’t need to break in if they can just sign in. Verizon’s 2025 Data Breach Investigations Report found that stolen or compromised credentials are the most common way attacks start. These breaches take the longest to identify and contain, nearly 10 months.
Credential sprawl increases credential-based risk in three key ways.
It expands the attack surface. As applications multiply and workflows integrate, access extends across human and nonhuman identities.
It creates visibility gaps. Credentials end up outside the identity provider, in places like browser passwords, spreadsheets, notes, scripts, and AI prompts. Over time, this leads to orphaned credentials with no clear owner.
It slows response when time is precious. Teams must track down scattered access, determine who owns it, and remove it without disrupting important work.
Without a clear strategy, credential sprawl spreads unmanaged. Teams create credentials quickly to keep work moving. Credentials persist because they work in a moment of need. Workforce change leads to drift as ownership shifts, roles change, and people leave, but automations remain. Traditional Joiner-Mover-Leaver processes are insufficient when credentials are created in browsers, scripts, and workflows.
A credential strategy is a system designed for how work really happens. Coverage, control, and lifecycle are what separate basic hygiene from real credential security.
Coverage means what you protect: passwords, passkeys, shared accounts, API tokens, SSH keys, service accounts, environment files, and AI agent secrets.
Control is about how credentials are managed: where they can be stored, how they’re shared, what rules apply, and how access is enforced where work actually happens, not just at sign-in.
Lifecycle covers how credentials change: creation, ownership, rotation, revocation, and proof, especially as roles change and automation continues.
A credential management strategy that lacks coverage, control, and lifecycle oversight doesn’t lower risk; it redistributes it.
Read more: Securing identities starts with 1Password.Securing user sign-ins isn’t enough if passwords, tokens, and secrets are still out of sight. The first step is to clearly know where credentials are, what they’re for, and who can access them. This way, you can answer who has access to what and why without a manual search.
Visibility is only the beginning.
Identity security should not slow innovation; it should make it safe. Organization-wide credential security makes that possible by creating consistent protection and a frictionless experience that people adopt across every person, tool, and workflow.
In a comprehensive model, administrators can manage every credential. Employees and developers get passwordless sign-in across devices. AI agents work securely. IT and security leaders can set standards that make autonomy safe across the business.
AI will continue to accelerate change. To support this progress without expanding the shadow credential layer, comprehensive credential security is essential. Every credential must be governed, every secret should have an owner, and every access path should be ready for audits and easy to revoke if needed.
That’s the world 1Password Enterprise Password Manager was made for.
Request a demo to learn more about securing identities with 1Password.
Request a demoALL RSS FEEDS
DISCLAIMER:
Microsoft’s ‘data sovereignty’ promise for Europe comes with an asterisk. Starting April 17, 2026, the company will start sending Copilot data to foreign servers for processing.
With the introduction of flex routing for Microsoft 365 Copilot, Large Language Model (LLM) inferencing — the step where your data is actually processed — may take place in the US, Canada, or Australia when European data center capacity runs short.
These changes are being applied by default. For new customer accounts created after March 25, 2026, flex routing is already on. For everyone else, it will be enabled automatically unless you opt out. (Instructions on how to do that below.)
If your business is based in the European Union or the European Free Trade Association (EFTA), this isn’t a small technical update. Flex routing changes whether your AI workflows stay within the EU or leave it without your knowledge. And it highlights what Big Tech’s version of digital sovereignty really means for Europe: They’re still in control.
Inferencing is the moment an AI model processes your prompt to generate a response, whether that’s summarizing a document, answering a question, or drafting content. By the time this happens, your data has already been assembled. Even if your data is stored in Europe, it may now be processed elsewhere — automatically, under a non-EU jurisdiction.
Microsoft makes it clear that data will remain encrypted in transit and at rest. That might reassure some customers. But if you’re operating under frameworks like the General Data Protection Regulation (GDPR), the Network and Information Security Directive (NIS2), or the Digital Operational Resilience Act (DORA), protecting data in storage and transmission isn’t enough.
Processing (or inference) is where exposure can occur. And under flex routing, that point can now move.
For an AI model to perform inference, data must be made accessible for computation. Your prompts, emails, files, and metadata are gathered and sent to the model. With flex routing, that package can be processed outside the EU.
Where your data is processed matters — even if it’s encrypted on the way in and out.
Microsoft’s decision to make flex routing a default feature is a red flag. Research shows that most people don’t bother to check their defaults or update them. If data sovereignty was something the company cared about for its European customers, it would not have implemented flex routing automatically.
It also puts your compliance department on notice that vendors may suddenly decide to change an important policy. You are now responsible for monitoring vendor updates, interpreting their implications, and adjusting settings to remain compliant. This may seem unfair if you selected a US-based vendor under the impression your data sovereignty was important to them.
If your vendors are based in the US, you’re relying on systems built for a different regulatory reality — one you don’t control, but still have to answer to.
Software updates, support, legal policies, and pricing decisions are made in Silicon Valley or Seattle. The rules your vendor follows are set in Washington. But your business is held to European standards.
That’s why more companies are starting to look at European alternatives to Big Tech. When your infrastructure, policies, and legal framework are aligned with the region you operate in, data sovereignty becomes enforceable, not conditional.
Europe has found itself in a difficult and dangerous situation.
Last August, Proton’s Europe tech sovereignty report revealed that over 74% of publicly listed European companies depend on US infrastructure for their basic tech services. Whether sending emails or running critical infrastructure in the cloud, Europe places its digital destiny in the hands of a few American service providers and the government they answer to.
That report now seems prescient. Over the last few months, rifts in the North Atlantic alliance emerged over tariffs and territory, culminating in a recent threat from Washington to break apart NATO itself.
As Proton CEO Andy Yen said at a recent tech conference in France, “If Trump wants to take Greenland, he doesn’t have to use force. All he has to say is, ‘Tomorrow Google, Apple, Microsoft, and Amazon will stop working in your country if you don’t sign a contract and give me Greenland.’ And if that happens, they will sign within the hour.”
Europe’s digital sovereignty seemed irrelevant as long as the post-war order held. Now that those foundations are shaking, governments are switching over to technology and cloud services they can control. The French government is reducing its use of Microsoft Windows, and other European countries are taking similar steps. Our recent survey found that European consumers support these moves. Nearly three-fourths of them told us in a survey that their society was far too dependent on the United States for technology.
But what does this mean for business leaders?
The problem of dependency isn’t just political. When your core systems rely on foreign providers, your critical systems — email, files, infrastructure — can be disrupted by economic and political decisions far away.
That’s why we urge business leaders to treat their tech stack not as a cost, but as an investment in control, resilience, and long-term independence. Retooling your company is as much a practical challenge as it is a mindset shift.
Here are three questions to ask yourself:
Corporate managers face a strategic decision about their internal tools.
Big Tech platforms offer convenience: They’re familiar, widely adopted, and easy to justify as the safest choice. “Nobody gets fired for buying IBM,” as the saying goes. But technology isn’t a commodity. Your tech stack shapes how your business operates, who controls your data, and how resilient you are when circumstances change.
Take for instance: In the late 2000s, the Chinese government realized it was too dependent on foreign oil. So it began to invest in the creation of a new domestic electric vehicle industry. Nearly two decades later, Chinese carmakers produce about two out of every three electric vehicles sold globally.
If Chinese decision makers had viewed automobiles as a cost, they would have purchased reliable gas-fueled cars from Japan or Detroit. Instead, they decided automotive tech was an investment. It paid off in the form of a powerful homegrown industry for China and affordable high-quality cars for everyone.
Your tech procurement decisions deserve deeper reflection and long-term thinking. When weighing your options, it’s worth asking:
Businesses that take these questions seriously are already turning security into a competitive advantage. Our 2026 SMB Cybersecurity Report found that using secure tech was a competitive advantage for 66% of businesses. And the price you pay for those services may not be so different; indeed, it might even be cheaper to buy local.
First there was greenwashing. Then there was privacy washing. Now there’s digital sovereignty washing.
US tech companies know digital sovereignty is important to European businesses. That’s why Google and Microsoft both promote a “Sovereign Cloud” and a European “data boundary” that evokes the idea of local control. “Discover a sovereign cloud without compromise,” Microsoft says.
It’s dangerous marketing because it’s not quite true. And the only thing worse than bad security is a false sense of security.
You don’t gain digital sovereignty just by choosing tech that processes and stores your data locally. You earn it through control — over access, usage, and the laws that ultimately apply to your data. The reality is very different from the marketing spin.
Here are five clues to tell the difference:
In the worst case, US tech companies could abandon the idea of data boundaries altogether. In April 2026, Microsoft moved precisely in that direction when it announced “flex routing” would be turned on by default for European customers, enabling offshore data processing.
If your data boundary can be punctured so easily, it’s sovereignty washing.
Europe has just woken up to the problem of US tech dependence. But that’s not because it’s a new problem. American tech companies have dominated the global business market since the beginning of cloud computing. Until now that has left European industry at a disadvantage.
But over the past 10 years, this has started to change, especially when it comes to enterprise software. From cloud computing to network security, identity management to AI chat assistants, European providers are reaching feature parity with global competitors.
In some cases, these providers depend on US infrastructure, but not always. For example, Proton’s Lumo AI runs open source models on European servers under European legal jurisdiction. That means your data stays under European control, not just physically, but legally and economically. Ironically, thanks to the GDPR and a privacy-first encryption architecture, Americans can gain more control and data privacy by outsourcing the tech stack to Europe.
By choosing European alternatives and promoting homegrown tech, you’re investing in how much control your business has over its future. The next wave of entrepreneurs and developers might not flock to Silicon Valley and instead choose Paris, Munich, or Geneva. It becomes a virtuous cycle that stimulates European demand for its own products.
That’s how this shift happens: not through a top-down policy, but through a multitude of individual choices by businesses like yours.
With the launch of Proton Sheets in late 2025 and, more recently, Proton Workspace, Proton Drive took an important step toward becoming a more complete privacy-first workspace for the files, photos, documents, and spreadsheets people use every day.
So far in 2026, we’ve focused on making that experience faster, smoother, and more practical across file storage, sharing, and collaboration. Here’s a look at the changes:
Some of the most meaningful improvements to Drive this year are about speed and reliability. Uploads and downloads are now noticeably faster, whether you’re saving a file, browsing photos, downloading from a shared folder, or collaborating with others. Early this year, Proton Drive delivered up to 60% faster uploads on iOS, as well as up to 30% faster uploads and 70% faster downloads on web. Shared workflows have improved too: Downloading shared files is now 70% faster, and uploading to a shared folder is 30% faster, even for people without a Proton account.
Behind these improvements is a stronger foundation: We’ve been rebuilding some of Drive’s most performance-intensive code into a shared SDK that powers core file operations across our apps. That means improvements can roll out more consistently across platforms, helping make Drive feel faster, smoother, and more dependable overall.
Some of the most meaningful user experience improvements are the ones that remove friction from everyday tasks, especially on mobile, wherever you are. We made a number of updates across the Drive mobile apps to help people get things done faster and with fewer steps:

Spreadsheets often end up holding far more of our lives than we expect. In our survey, we found that over 70% of people use spreadsheets for budgeting and personal finance. But these files often outlive their original purpose: 67% of US respondents said they still have access to spreadsheets or shared documents they no longer need. Many are also unsure what happens to their data behind the scenes, from ad targeting and content scanning to AI training and data sharing.
That is exactly why Proton Sheets matters, and why we have kept improving our end-to-end encrypted spreadsheet editor.

Among the changes, we’ve added and improved support for workflows people expect from modern spreadsheets, including:
These updates help make Sheets more practical for everyday use while preserving what matters most: Your spreadsheet data remains protected by Proton’s privacy-first design.
So far in 2026, our focus has been clear: making Proton Drive faster and more reliable across all platforms for the moments that matter every day. Many of these improvements came directly from listening to our community. We’re grateful for that feedback, and for the people who keep pushing us to make Proton Drive better.
At the same time, we haven’t lost sight of our broader priorities for 2026, including the long-awaited Linux client. We’ll keep building on this foundation throughout the rest of the year.
Thank you for supporting our mission to give people everywhere a secure, private way to store and share files without their data being exploited for profit. Tell us what you would like to see next on UserVoice.
As governments across the world charge ahead with age-verification laws, a well-intentioned rush to protect children is actually putting them at risk.
The goal is to shield children from harmful materials, but these laws lack sufficient safeguards to protect privacy. All it takes is a single data breach, and a law intended to protect children could end up exposing their sensitive personal information to the world.
To be sure, children deserve an internet that they can navigate safely. But explicit content and predatory social media are not the only dangers online. Privacy violations, especially for the young, can also do serious harm. Especially since, as the old warning goes, “The internet is forever.”
We should not accept simply trading one risk for another.
To verify their ages online, users are often asked to submit government IDs, credit card numbers, selfies, or unique biometric information. When breaches happen — and they do, with depressing regularity — that sensitive data is exposed.
What’s more, many companies outsource their age-verification services to a handful of third-party vendors. Those suppliers, as storehouses of the data, become all-too-tempting targets for hackers and criminals. Without sufficient policies on data minimization, usage, storage, and privacy, user data remains deeply vulnerable.
In September, a cyberattack compromised a third-party vendor for Discord, a video game chat platform, granting the attacker access to at least 70,000 images of government-issued IDs, including passports and licenses.
Discord had been collecting photos of IDs in compliance with the UK’s age-verification law, which took effect in July.
Since the implementation of the law, the UK’s Office of Communications reported that “many records were not consistent” with record-keeping and review guidance. Many companies also failed to show how they were taking responsibility for online safety risks.
This breach highlights the real-life consequences of online attacks. As age-verification laws gain traction on a larger scale, an emphasis should be placed on privacy. Protecting sensitive personal information makes the internet a safer place for everyone, including children.
The rush to prioritize age checks for minors without prioritizing secure methods of verification create additional cybersecurity risks that can put children in harm’s way. As governments make premature decisions about these technologies, they are opening a Pandora’s box for hackers and cybercriminals to mine at their leisure.
Moving forward, governments and legislatures must be thoughtful about the technologies they employ and the risks they come with. Policymakers should prioritize decentralized solutions that protect minors against the real threat of cyberattacks, without compromising users’ anonymity and right to privacy.
With age-check systems, there isn’t a one-size-fits-all solution.
Research suggests that no single method effectively protects children while also balancing concerns about privacy and access to information, but there is a way forward. Applying a broad array of common-sense measures, including parental controls and digital literacy education, can go a long way in helping guard children against potentially harmful content while remaining mindful of privacy rights and the nuanced ways young people use the internet.
It’s not exactly an alternative to age verification, but proponents of attribute-based verification argue that it provides a more secure and private method of verifying a user’s age. That’s because it verifies only what’s necessary, such as requiring a self-declared age range rather than a government ID. But it has its limitations. Notably, any method that relies on self-declaration can be easily circumvented. It also fails to address the issue of personal data privacy, as it does not prevent websites from collecting additional information, such as users’ IP addresses.
Attribute-based age checks, however, store data on the user’s device. This limits the number of people with access to a user’s private data and reduces the cyberattack risks posed by other age-check methods.
Like attribute-based verification, a zero knowledge proof (ZKP) provides a way for websites and apps to verify a user’s age without the user having to explicitly share personal data about their identity. But ZKP isn’t an alternative to age verification, rather, it’s a cryptographic tool that allows websites and apps to verify information about the user in question without gaining any additional information about the user.
In 2025, Google announced ZKP integration within Google Wallet to provide age verification across multiple apps. The tech company said it would continue to use ZKP with existing partners, like Bumble, to verify users’ ages without revealing their identities.
The Electronic Privacy Information Center’s model bill for Age-Appropriate Design Code (AACD) was designed as an alternative to the rise in age verification legislation. The AACD gives children agency over their online experiences while requiring tech companies to evaluate their programs for features that put children at risk for compulsive use.
Additionally, the AACD would prohibit these companies from implementing programs with high-risk features, and would provide transparency into addictive design practices.
Unlike age verification legislation, the AACD places responsibility on the manufacturers of these technological platforms, rather than the users they exploit, circumventing issues around privacy and personal security.
Parents and children can work together on a solution that best meets their needs. Device- and OS-level parental controls offer a more personalized approach to gatekeeping what kids see online.
Parents can set up their children’s devices to restrict or limit certain content. OS-level controls can be set up to limit daily screen time, require approval to install apps, and use web content filters, but the internet’s ever-changing nature means web filters can’t always keep up.
Used in conjunction with other protective measures, however, these restrictions can act as guardrails that reduce children’s exposure to harmful content without universal age verification.
Research suggests children who report less screen time are also the most likely to have parental controls on their devices. Yet parental controls are underutilized, according to the nonprofit Family Online Safety Institute.
Use of parental controls varies widely across device types, and they are hardly a perfect solution. Children may have access to more than one device, making time limits and content filters harder to enforce.
Talking with kids about online safety can make parental controls more effective.
In households that reported six or more conversations about online safety annually, both parents and children were more likely to say that parental controls effectively keep children safe online, research found.
And those offline lessons can be valuable tools in protecting children when they are online.
Research from the World Health Organization suggests educational programs and cyberbullying prevention can work to reduce violence against children online. Programs that discuss online dangers and offline violence prevention, as well as healthy relationship skills, can help address children’s vulnerabilities to sexual abuse, harassment, and bullying, a WHO study found.
Parental guidance, support, and the ability to engage critically with online content all affect how a child might feel about what they see on the internet, research suggests.
Protecting children doesn’t require turning the entire internet into an ID checkpoint. The widespread deployment of online age checks struggles to balance legitimate child protection concerns against users’ data privacy rights. Until that balance is struck, existing measures can help kids navigate the internet confidently without surrendering sensitive personal information at every turn.
The way small and medium businesses work has changed for good — but so has the way they get attacked. Teams are distributed, SaaS tools handle everything from payroll to project management, and contractors and vendors rotate in and out of systems regularly. With each new tool or employee with access, the number of potential entry points increases.
That expanding attack surface matters because credential-based attacks, including phishing, account takeovers, and password theft, have become one of the most common ways businesses get breached. They work precisely because access has sprawled, which makes it difficult to track. All an attacker has to do now is find one valid set of credentials to bypass your business’s defenses.
In this context, it should be encouraging that over half of small businesses now use a business password manager. But Proton’s SMB Cybersecurity Report 2026 — a global study of 3,000 SMB decision makers — found that one in four still experienced a breach last year.
All this points to a gap between how tools are adopted and how they’re actually used.
Most password managers are designed to do one thing well: help you remember your password. In practice, that means creating complex and unique passwords and managing them in an encrypted vault. That’s meaningfully better than the norm of reusing the same credentials across accounts and platforms.
But with passwords being an attacker’s easiest point of entry, SMBs need password managers to do much more than just solve memory and convenience problems. They need it to secure access.
Access is a far broader question. Do the right people have the right credentials — and would you know what they unlock or if they fell into the wrong hands? And as teams grow, subscriptions stack up, and contractors cycle in and out, your organization’s considerations need to shift from merely strengthening passwords to accounting for real-world security threats.
That’s the change most businesses don’t make until something goes wrong.
The key insight of our report was that businesses adopting password managers don’t consistently use them.
Unsafe credential sharing still persists at surprisingly high rates:
That’s a picture of busy people taking the fastest route available at that moment. Instead of toggling over to the password manager app and sharing a new credential in its proper vault, they might paste it into Slack or an email.
Workarounds feel harmless in isolation. But over time, credentials end up scattered across inboxes, chat histories, and shared documents in ways that are hard to untangle. When an employee leaves, you can’t later revoke access. And updating passwords on a moment’s notice after a data breach becomes impossible unless it’s stored in a centralized secure location.
Training to enforce security policies can help, but our research revealed even that isn’t quite enough…
Our report found that 39% of SMBs have experienced a security incident caused by human error. That statistic is easy to misread; the natural response is to assume that more careful employees means fewer incidents.
But this framing misses something important: Security systems that depend on perfect behavior under everyday pressure will always be let down by reality. Mistakes happen not because people don’t care — they happen because the secure option often demands more effort and time than the typical SMB can afford. Even well-intentioned teams will find workarounds when they’re resource-stretched.
The lasting fix isn’t more training. It’s designing systems where the secure option is also the easy one.
When sharing access safely takes no more effort than dropping a password into a chat message, people will use it.
The credential problem compounds as teams grow.
Eighty-six percent of SMBs now rely on cloud-based services for day-to-day operations. That typically means credentials sprawl across project management tools, finance platforms, marketing software, file storage, and customer systems, each with its own permissions and access history.
Access doesn’t just scatter across systems; it spreads across the organization, flowing between teams, external partners, contractors, and former employees who may still retain a way in.
This means that in reality, credentials accumulate, old access continues to linger, and the number of people who have — or have had — the keys to your most sensitive systems scales beyond easy tracking.
The SMBs that experienced breaches last year weren’t cutting corners: 92% were actively investing in security tools. They had password managers, encrypted email, training programs, and written policies in place. In other words, their setups looked solid on paper.
What many lacked was consistent enforcement. Multi-factor authentication (MFA) was switched on but not required, password managers were deployed but not embedded into daily habits, and onboarding and offboarding processes were handled informally rather than systematically. We suspect, given the popularity of browser password managers, that many were not even using a centralized team platform at all — instead relying on a patchwork of less-safe options on an individual basis.
Each of these is a small gap that stays invisible right up until it isn’t.
The real measure of a security setup isn’t what tools are on the list, but whether those tools hold up under the everyday pressure of how people actually work.
Here are some practices to help bring this reality closer for your business:
Want to know what else you could learn from our survey of 3,000 business leaders across six key markets? Read more in our SMB Cybersecurity Report 2026. You’ll learn what causes breaches and what they actually cost, where human error shows up most often, how cloud and AI adoption are creating new blind spots. It also includes practical steps for beefing up protection that hold up in real-world conditions.
In the 12 years since Proton began, millions of people have joined our mission to make the internet safer and more private, including over 100,000 businesses and nonprofits. They rely on Proton’s encrypted suite to protect their customers and teams, and we’ve continued to add more services and plans to support them — most recently with the launch of Proton Workspace.
Today we’re excited to announce the next addition to Proton Workspace with our secure appointment scheduling tool in Proton Calendar.
Whether you work with teams, run a side hustle, or take appointments from customers, you can now easily create public booking pages that show when you’re available, and your clients and colleagues can book an appointment in seconds. It automatically creates a new event on your calendar and generates a private Proton Meet link where you can have a secure video call. New events are zero-access encrypted, so all the details stay between you and your contact. Not even we have access.
For people dependent on platforms like Calendly, this means you no longer have to pay an extra subscription or give away calendar data to third-party services where it can be leaked or spied on. It’s a perfect tool if your business is based on appointments or if you want to save time finding an available slot to meet with colleagues or friends.
And it’s available in our new Proton Workspace plan, which combines all our business productivity tools into a single plan for complete data protection.
The new appointment scheduling tool is fully integrated with Proton Calendar and Proton Meet to protect your business data end to end.
That’s important because your team calendar contains a trove of information about you and your business activities: your location, your priorities, and your contacts. It’s critical to keep that information protected from Big Tech platforms that could monetize or leak it, and from hackers who could use it against you for fraud or phishing attacks. Using a third-party booking platform spreads your information across the internet and increases your risk of a data breach, especially when those tools don’t use strong encryption.
Appointment scheduling bridges two fundamental business tools: Proton Calendar and the all-new Proton Meet for encrypted video calls. It’s not enough for a business to be able to plan and host a secure video conference — they also need to be able to schedule it. Our appointment scheduling tool fills this gap.
Appointment scheduling is simple to set up, and it’s available on all paid Proton Mail plans, Proton bundles, Meet Professional, and Proton Workspace. Teams with Workspace can create up to 25 booking pages to support multiple meeting purposes and durations.

In your Proton Calendar, just add a new booking page, give it a name, and specify the times you’re available. Your booking page will have a link that you can share publicly, such as on your website, email signature, or social media profile.
Whenever someone books a meeting with you, the event will instantly sync directly to your calendar (so it’s not possible to double book). And your contact will receive a confirmation email. If you’ve selected Proton Meet as the location for the meeting, the confirmation will include a secure meeting link.
The time, description, and participants on every event are zero-access encrypted, meaning it’s locked with your private encryption key and can’t be accessed by Proton or anyone else.
Learn more about how to use appointment scheduling
With appointment scheduling, Proton Calendar becomes more than just a way to track your schedule — it’s a way to grow your business or side project. If you’re a professional service provider, letting clients book meetings is a core part of your business model. But even if your business doesn’t run on external meetings, the appointment scheduling tool can help you save time or be more available to your team.
Appointment scheduling is perfect for:
Securing your meetings isn’t just about protecting your own business, it’s also about protecting the people you do business with. When it comes to fields like healthcare, data protection is even an ethical and legal obligation. Appointment scheduling in Proton Calendar helps you meet those obligations while signaling to your customers that your business takes security seriously.
Online age checks are intended to keep violent, sexually explicit or other age-inappropriate content away from children. But do they?
Under-age social media users are often able to circumvent age restrictions, especially at the account-creation stage, research shows. In other cases, age checks have blocked children from accessing content that was later determined to pose no risk.
When faced with obvious harms, the desire to “do something” is understandable. But we need a higher standard. When it comes to children, we need to do something that works. And age verification as it is currently practiced often falls short of that basic goal.
Most parents of adolescents in the United States worry about social media’s effects on mental health, among other issues, according to the US surgeon general. At the same time, parents are concerned about the scope of age checks. In a study by the nonpartisan Center for Democracy & Technology, parents and teenagers voiced concerns about the checks’ effectiveness, data privacy, and user agency.
At their core, age-verification systems aim to prevent young people from accessing harmful or adult-geared content, but many critics have warned that even well-intentioned policies could create risks to free speech and data privacy for all internet users, not just children.
What’s considered harmful depends on whom you ask. Industry regulations, state laws, and national policies can all dictate which content is deemed harmful to young people, but some language is more vague than others.
The United Kingdom’s Online Safety Act, for example, lays out categories of content that children must be shielded from online. They include:
In Australia, the move to ban social media accounts belonging to people younger than 16 more broadly cites concerns about screen time and mental health.
Whether these measures effectively shield young people from harm is debated.
Some researchers have warned that age checks could impede access to medically accurate sexual information and other educational content.
After the U.K. Online Safety Act took effect, the government noted “instances of over-moderation” in which children were blocked from viewing content that didn’t pose a risk.
Even with age-check systems in place, potentially harmful and age-inappropriate content remains accessible to kids. In some cases, childhood deaths have been linked to suicide- and self-harm-related content and risk-taking social media challenges, according to the surgeon general’s advisory.
The same advisory, however, noted that social media can be a source of positive community, connection, self-expression, and important information.
Age-gating access to those corners of the internet stands to disproportionately affect young people who rely on online communities for support and information.
Measures put in place to label content and guard children from age-inappropriate material have also been flawed.
In September, Disney agreed to pay $10 million to settle allegations by the Federal Trade Commission, which accused the company of failing to label its children’s videos on YouTube as “Made for Kids.”
Failing to correctly label the videos meant Disney collected children’s personal information when they watched the unlabeled content and autoplayed “Not Made for Kids” videos when they finished. Children also became targets of online advertisements geared toward older viewers.
Disney didn’t admit any wrongdoing as part of the settlement.
The effectiveness of age checks remains to be seen.
In the weeks after Australia’s policy took effect, social media companies revoked access to about 4.7 million accounts belonging to children.
Findings from a 2024 study suggest that the widespread global deployment of age verification has resulted in privacy-invasive or ineffective methods.
Research from the U.K.’s independent online safety regulator, the Office of Communications, pointed to some measurable changes in internet behavior, but it’s still too soon to evaluate effectiveness.
The number of visitors to pornography sites in the U.K. declined by one-third since the Online Safety Act took effect in July, the office noted in a December online safety report. The office is assessing how much the decline may have reduced children’s exposure to pornography.
“While it is too soon to assess the long-term impact of these changes, the widespread adoption of age checks means that children of all ages are now less likely to encounter pornography accidentally, which research has shown to be the way most children encounter porn,” the report said.
The office is expected to publish its initial data and analysis on children’s online experiences by May.
Governments around the world are adopting laws intended to protect young people online. Age verification has emerged as a shared policy response, but in practice it produces very different internets shaped by unique legal, technical, and social conditions.
These case studies show what happens after age-verification laws take effect, focusing on three distinct models: decentralized legal experimentation, direct regulatory enforcement, and platform duty-of-care obligations. Together, they demonstrate how a single policy idea evolves when it moves into the real world.
The US exemplifies how age verification can spread without a national law. State legislation, court challenges, and platform responses have collectively reshaped online access, creating diverse outcomes across the country.
Federal lawmakers tried long ago to age-gate adult content on the internet. The Child Online Protection Act, passed by Congress in 1998, required commercial websites hosting material deemed harmful to minors to restrict access, often through age-verification mechanisms. Courts blocked the law repeatedly on First Amendment grounds, and it was ultimately struck down after years of litigation. The rulings reinforced protections for lawful online speech, including concerns about overbroad restrictions and the impact on anonymous access, shaping how later policymakers approached age-verification proposals.
Beginning in 2022, states began introducing legislation requiring adult-content sites to verify age, with early efforts in Louisiana and Utah helping establish a template that other jurisdictions soon followed. Lawmakers framed these measures as child-protection policies inspired by international proposals.
In lieu of a centralized system, these laws typically made platforms responsible for preventing underage access. Sites could face civil penalties—including fines, private lawsuits, or court-ordered restrictions—if minors accessed restricted content without “reasonable” safeguards in place.
States rolled out age-verification requirements aimed primarily at porn sites and other explicit content.
Texas quickly became the bellwether legal test case. Challenges to Texas HB 1181 moved through federal courts and ultimately reached the US Supreme Court, where justices allowed the law to take effect in the midst of legal challenges. The decision signaled that state-level mandates could proceed without definitive resolution.
That opened the door for other states to advance similar laws alongside ongoing litigation. Because each state set different standards and timelines—and because legal language left a lot of room for interpretation—there was no uniform technical solution, leaving platforms to navigate a rapidly expanding patchwork of regulatory demands.
Rather than uniformly changing how age is treated and proven online, policy pressure changed the internet itself.
Compliance became a risk calculation for platforms, as they weighed verification costs, liability, and privacy issues. Some—ranging from adult-content sites to social media—chose to restrict or withdraw services in affected states. Access began to depend on geographic location, producing a fragmented online experience.
Proposals and laws have increasingly targeted app stores and other digital intermediaries, shifting responsibility from individual sites to infrastructure providers. This lets policymakers gauge whether age gating can work at the ecosystem level.
Americans are sharply divided. Supporters argue that state laws finally imposed accountability on large platforms after years of failed federal legislation, reflecting a growing view among policymakers that voluntary safeguards are not enough to protect minors online. Critics, including civil-liberties organizations and digital-rights advocates, warn that mandatory age verification chills lawful speech and weakens protections for anonymous expression.
Litigation is the central arena for resolving these tensions, and state attorneys general are the front-line enforcers. As challenges move through the courts, judges continue to grapple with whether mandates constitute permissible regulation or unconstitutional restriction.
As a result, America’s internet is an experiment moving further from legal clarity, even as age verification spreads.
Focus: Legal viability
Outcome: Policy is shaped by litigation outcomes
After decades of global debate over online safety for minors, the UK became the first country to enforce modern age assurance on a national scale.
Early UK media regulation, particularly the Communications Act 2003, established content protections for minors in broadcast and on-demand services, but it didn’t address open internet access to pornography.
Under the Digital Economy Act 2017, the original plan was to mandate age checks for access to adult content, requiring age-verification technology specifically. That plan was repeatedly delayed and finally abandoned in 2019 amid privacy concerns and the practical challenges of enforcing rules against services operating outside the UK.
Instead of prescribing how content is gated, the Online Safety Act 2023 regulates outcomes, requiring services to deploy “highly effective” age-assurance measures and demonstrate how effectively they protect minors.
This created a broader safety framework, enforcing platform responsibility through performance standards that extend beyond sites offering adult content.
Implementation fell to UK communications regulator Ofcom. It outlined expectations for platforms, requiring age-assurance systems capable of reliably distinguishing adults from minors, with enforcement backed by investigation and financial penalties.
Ofcom didn’t specify a method. Companies could use identity-document checks, biometric estimation, third-party verification vendors, or alternative approaches—provided they met Ofcom’s effectiveness thresholds. This flexibility led to a rapid, albeit uneven, rollout of age verification.
The UK’s internet transitioned from an open-access model moderated after the fact to one requiring proof of eligibility to enter certain spaces.
When enforcement timelines arrived in 2025, major platforms began modifying access flows, and users began encountering checkpoints where none had existed before. These age checks were embedded in account creation, browsing activity, and content discovery, and that affected anonymity, friction, and participation online.
For platforms, age assurance became a continuous compliance obligation subject to interpretation, audit, and penalty; and it proved hard to define. Ofcom opened investigations into dozens of porn sites and issued penalties against operators whose age-assurance measures didn’t meet the standard. In this way, acceptable gates evolved through strict enforcement actions.
Public response has been mixed as to whether the system represents overdue protection or risky overreach.
Among the concerns raised by privacy advocates are assertions that mandatory age-assurance normalizes identity checks for lawful activity, expands collection of sensitive data, and threatens anonymity for users who rely on it for freedom to explore and express themselves.
Spikes in VPN use have been reported, suggesting that some UK users prefer workarounds to participation in verification systems. Others question the effectiveness of age gates, including some young users who’ve argued that they limit access without resolving underlying harms. Still others say critics should give these protections time to prove out, framing the law as a necessary adaptation to a changed digital environment.
The UK’s experience shows how age-verification policy alters the internet through cumulative shifts in access, accountability, and user behavior—changes that remain contested.
Focus: Access control
Outcome: Users must demonstrate eligibility to enter restricted spaces
Australia has drawn international attention for its online youth-safety agenda, where age checks emerge from platform duty-of-care obligations instead of a standalone age-verification law.
Australia’s Online Safety Act 2021 built on earlier regulatory frameworks (1992, 2015, and 2018) that relied largely on complaint-based takedowns of harmful content. Policymakers concluded that reactive removals were insufficient and shifted toward requiring large platforms to reduce risks up front.
The Act significantly expanded the authority of the e Safety Commissioner, turning the regulator from a complaint handler into a proactive supervisor of online safety. Rather than prescribing specific verification methods, the law made platforms responsible for preventing foreseeable harms to minors.
This shift laid the groundwork for age assurance by binding platform compliance to the ability to distinguish between adult and underage users.
Implementation centered on regulatory guidance and enforcement powers exercised by the eSafety Commissioner. Platforms were required to show how their services reduced risks to underage users, guided by regulator-approved safety standards and ongoing oversight.
In practice, this meant strengthening moderation systems, activating parental controls, restricting features for younger users and developing mechanisms capable of identifying them. So platforms deployed age-assurance measures such as age estimation, behavioral-detection systems, and layered verification approaches combining multiple signals to assess a user’s age, often trialed through government-supported technology testing programs. Age assurance therefore functioned less as a single checkpoint and more as an ongoing compliance capability embedded in everyday service operation.
In December 2025, Australia extended this duty-of-care strategy through a world-first social media ban for users under 16, explicitly conditioning access to major platforms on the ability to determine a user’s age.
For platforms, safety obligations became continuous and adaptive. Meeting regulatory expectations increasingly required systems capable of reliably distinguishing minors from adults, turning age assurance from an optional safeguard into a prerequisite for enforcing youth-access restrictions.
For users, changes ranged from stricter defaults and safety features to large-scale deactivation of accounts identified as belonging to underage users.
The result was deeper regulatory influence without universal identity-based age verification, reflecting a research-driven model that evaluates safety outcomes and emerging age-assurance tools instead of defaulting to biometric or document-based checks.
Australia’s approach has generated praise and concern, both inside and outside the country.
Proponents argue that platform design shapes online risk more than individual behavior alone, and that regulating platforms offers governments a more practical point of intervention. Critics believe that expanding safety mandates fails to adequately protect children and offers a quick fix to complex social and political problems.
As debate intensifies over whether enforcement will ultimately require more invasive age checks, this case shows that when governments regulate platform responsibility first, age verification can be a practical consequence.
Focus: System design and ongoing oversight
Outcome: Platforms must demonstrate their environments are safe for minors
The days of the checkbox honor system are ending as efforts to age-gate the internet spread worldwide. The goal of protecting children is widely embraced: Age should be checked for access to certain content or sometimes entire platforms, as young people are exposed to legitimate risks when left to explore and engage without guardrails.
But the methods of checking age—both the existing ways and those forming under intense regulatory pressure—vary significantly in effectiveness and intrusiveness. From one approach to the next, there are stark differences in how much data is collected and who controls it. Regardless of the method, the most consequential moment is the point where age is actually checked. The mechanics of that interaction, and how its outcomes are handled, drive real-world implications for privacy, security, and free expression.
Yet the distinctions are often blurred, stemming from the terminology around age checks. Age gating, age assurance, age estimation, and age verification can get collapsed into a single idea. Understanding why that matters starts with breaking down the language.
Age gating and age assurance are standards—policy goals that describe intent and confidence, not mechanism. Age gating tells you that an age-based restriction exists. Age assurance signals that some effort is being made to enforce that restriction. These terms don’t specify how, or how effectively, age is determined.
Age estimation and age verification are methods — technical categories for how age is checked. And the contrast is central to the debate over how age checks should happen online.
As lawmakers, courts, tech companies, and advocacy groups address both the complexities and conflicts of age gating, the terms “age estimation” and “age verification” are sometimes treated as interchangeable. That shorthand obscures meaningful differences in accuracy, accountability, and data exposure.
Age estimation, also known as age assurance, is exactly what it sounds like—an inference, not a confirmation. These systems draw on data already available within a platform, such as profile photos, videos, audio, declared information (like a birth date), and account metadata (like how long an account has existed). Using biometric techniques like voice or facial analysis, combined with account history and behavioral patterns, the system generates a probability that someone falls within a given age range.
Because this doesn’t require identity documents, age estimation is often framed as “privacy-preserving.” But data exposure depends on the individual system: Is age estimated once or continually? What signals are used? How secure is the system itself? And if age is misread, what happens?
Inference-based systems are inexact and can be fooled, such that a user’s age may be misclassified in either direction, with access allowed or denied where it shouldn’t be. On the gaming platform Roblox, which rolled out mandatory age checks for access to certain features, young users tricked the system with fake mustaches and other disguises, underscoring the risk of relying on inference alone.
Other concerns have been raised about accuracy and bias, as results depend heavily on image quality, vary from algorithm to algorithm, and are affected by unique intersections of personal attributes, with disproportionate misreads on under-represented groups. Data from Australia’s age-assurance technology trial—tied to a nationwide ban on social media for teenagers—showed that age estimation produced higher error rates for people with darker skin tones and for some demographic groups, including those from Indigenous and Southeast Asian backgrounds.
If eligible users are denied, recourse is limited. They generally aren’t told why, and the default solution is to upload identity documents—the exact thing age estimation is meant to avoid.
Age verification aims to confirm age as a fact, using proof from a trusted source. Today, that usually means a government-issued ID like a driver’s license or passport, either uploaded directly to a platform or filtered through a third-party service that verifies age and sends back a yes-or-no result.
The risk of document uploads is intuitive: Scans can be stolen or misused, particularly as age checks spread across more services. What’s easier to miss is that even when documents are deleted from a platform, the outcome of the age check often persists—stored alongside an account or session and linking back to an identifiable user.
Age-verification systems fall into two categories: those that bind age checks to identity and those that try not to.
Identity-linked systems are the dominant model today, employing the familiar ID upload flow. Platforms may not retain copies of documents, but the verification outcome is almost always stored, linking lawful content access to a real person who may not want that association recorded.
Adult-content sites illustrate the conflict. In states where age-verification laws have been enacted, compliance has largely meant identity-linked checks, requiring users to upload IDs through third-party vendors. As a result, industry giant Pornhub pulled out of 23 states, pointing to privacy risks. The company has said it supports age verification “when it is done right,” advocating for device-level age checks rather than site-based age checks.
Similar dynamics appear in app-store ecosystems, with age verification prompted at download, signup, or the account level. When the outcome of that check is tied to an account, it stops being a one-time gate and becomes an attribute, shaping how the platform understands and manages the user. That can include:
Users typically aren’t told how long their verification status persists, where it is stored, or how it may be reused, leaving them with little ability to contest errors, revoke consent, or gauge long-term implications.
Other age-verification systems attempt to avoid or reduce identity linkage. These approaches rely on credentialed or token-based claims, both of which perform an age check once and then reuse the result to grant access later.
Credentialed claims: Verifiable digital credentials (VDCs) rely on identity checks already performed by trusted institutions (think DMVs and banks), allowing users to confirm age online with a digitally signed cryptographic proof—aka the issuer vouching for the age claim. Most VDCs employ selective disclosure, revealing only what’s necessary to meet an age threshold (e.g., confirming that someone is “over 18”), though more advanced zero-knowledge proofs aim to verify eligibility without sharing any personal data at all.
Both reduce exposure at the point of access. But the privacy and security benefits depend on who issues the credential and how it’s stored, as well as which platforms accept it inside the emerging digital-ID model (which carries its own impacts to privacy and access).
Token-based claims: Tokens are like hand stamps at a concert; short-lived, site-specific proofs that allow repeat access without rechecking age every time.They are typically issued after an initial verification and used internally by a platform to streamline access.While that reduces repeated data exposure within a single service, tokens don’t eliminate identity linkage at the point of issuance and offer users scant visibility into how access is remembered or reused. Users typically can’t examine, limit, or revoke these claims, which turn a one-time access decision into an ongoing state. Tokens are a platform optimization, not a rights-protecting feature.
Whatever the verification pathway, the highest risk sits at the point where age is checked—and system design and implementation make all the difference.
Laws define the obligation to keep young people safe online, but they are carried out by regulators, platforms, vendors, app stores, and OS providers that must interpret vague requirements under real operational pressure.
Whether a law calls for “effective age assurance” or “privacy-preserving age verification,” it rarely specifies exactly how the requirement should be met in terms of:
Such decisions are left to downstream authorities, which is why the same legal language can produce radically different outcomes. These authorities are simply optimizing for different things: Regulators are optimizing for governance, platforms for liability, vendors for marketability, and infrastructure providers for uniformity. Beyond these institutional priorities, the primary concern is not democratic legitimacy or proportionality, but defensibility to show that sufficient steps were taken to prevent underage access. In that environment, ambiguity is seen as risk, and risk is minimized through standardization and overcompliance—or through platforms pulling out of states where compliance raises both ideological and financial concerns.
Social network Bluesky chose to block access entirely in Mississippi rather than comply with a state law that would have forced it to verify age for all users and collect sensitive personal data. The platform said the requirements went beyond child safety goals and would “limit free speech and disproportionately harm smaller platforms.”
The most restrictive option becomes the baseline not because of public input or legislative intent, but because of operational risk management. The consequence is an abstraction of policy that narrows the practical scope of all users’ rights online.
Advocacy groups warn that age gating threatens a free and open internet. They argue that adults misclassified as minors can be blocked from lawful information. That users unwilling or unable to submit identity documents can be excluded entirely. That communities relying on anonymity for reasons of safety, stigma, or self-exploration may find that essential information and connection now come with conditions they can’t meet. And that exclusion of children from the internet that isn’t “necessary and proportionate” violates their fundamental rights.
While the spirit of these laws is child safety, industry analysts worry that the legal language could be applied to any site offering content with “adult themes,” whether that means information about sexual health, creative image boards, or social forums.
These concerns have crystallized into ongoing legal opposition to age gating at both state and federal levels, despite widespread agreement that the internet should be safer for young users.
Understanding what “age verification” actually means helps clarify the challenges of finding that balance.
DISCLAIMER:

The DuckDuckGo subscription is a four-in-one privacy service that gives you extra protection beyond what's available for free in our web browser, search engine, and private AI chat, Duck.ai. It includes our VPN to encrypt your Internet connection, access to more advanced private AI when you want it, Personal Information Removal to help combat identity theft and spam, and Identity Theft Restoration.
The original DuckDuckGo subscription is now called Plus. (If you’re a current subscriber, this is what you have!) It includes all four protections and costs $9.99 USD/month or $99.99 USD/year. Enhanced with more powerful AI tools, the new Pro plan is $19.99 USD/month or $199.99 USD/year. Subscriptions are available in the U.S., Canada, the E.U., and the U.K. See this help page for international pricing and feature availability.
On Duck.ai, anyone can chat privately with ChatGPT, Claude, and other popular AIs, whether you have a subscription or not. Text chat, voice chat, and image generation are free to use within daily limits. DuckDuckGo subscribers on the Plus plan can do more, with higher usage limits and access to smarter AI models with extended reasoning. But the Pro plan is even more powerful.
We designed Pro for people who use AI frequently throughout the day, or for more demanding tasks that require multi-step reasoning…or both! Subscribers to the Pro plan get three additional Duck.ai upgrades:
This new Pro plan gives you the freedom to dive deep and iterate back and forth for complicated tasks, whether you’re fine-tuning images, analyzing data, writing long-form content, or making an in-depth plan. Higher limits also mean you don’t have to pick and choose as much; you can use AI for a broad range of day-to-day tasks.
When you take advantage of the extended reasoning on GPT-5.2 or Claude Opus 4.6, you’re more likely to get considered, relevant, and well-structured answers to even very complex prompts. And thanks to the Pro plan’s higher usage limits, you’re less likely to be disrupted in the middle of a complicated job.
If you primarily use DuckDuckGo to search and browse, and you’re not interested in advanced AI chat or added protections…our free offerings may meet all your needs. If you want to expand your privacy protection with our VPN, or you’re getting more into AI productivity tools, consider Plus! Pro is most suited if you use AI for tasks that require deeper context and multi-step reasoning.

The specific AI models included in each plan are upgraded regularly; at the time of publication, the lineup is as follows:
Yes! As a subscriber, you can switch between the Plus and Pro plan at any time. In the DuckDuckGo browser, go to Settings > DuckDuckGo Subscription. Select View All Plans, pick the plan you'd like to switch to, and proceed to payment or confirm. In third-party browsers, start by navigating to Duck.ai. Just go to Settings & More > Manage Subscription and follow the same steps above.
Ready to give it a try? Head to duckduckgo.com/subscribe to see if the Plus or Pro subscription is right for you!

2025 marks DuckDuckGo's 15th year of donations—our annual program to support organizations that share our vision of raising the standard of trust online. We are proud to donate to a diverse group of organizations around the world that promote privacy and security, digital competition, and a healthier online ecosystem.
This year, we’re donating $1,100,000, bringing DuckDuckGo's total donations since 2011 to $8,050,000. Everyone using the Internet deserves simple and accessible online protection; these organizations are all pushing to make that a reality. We encourage you to check out their valuable work below.

Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest.

ARTICLE 19 is an international think-do organisation, that takes its name from the Universal Declaration of Human Rights, and works to propel the freedom of expression movement, fighting censorship, defending dissenting voices and advocating against laws and practices that silence.

The Digital Progress Institute seeks to bridge the tech-telecom policy divide through incremental, bipartisan measures in line with its principles of bringing about ubiquitous broadband, 5G and beyond, privacy for every American, real competition in digital markets, and a full-stack framework for Internet policy issues.

EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.

With more than two decades of advocacy experience, European Digital Rights (EDRi) is the go-to, nongovernmental network working on EU and national laws and policies on privacy, freedom of expression, participation online, data protection and technology policy. EDRi unites over 50 organisations from across Europe (and beyond).

The Foundation for American Innovation, a think-and-do tank based in Washington, D.C. and San Francisco, CA, advances technology, talent, and ideas that support a better, freer, and more abundant future.

The Open Home Foundation fights for the fundamental principles of privacy, choice, and sustainability for smart homes - and for every person who lives in one. It is best known as the organization that owns and governs Home Assistant, among many other projects crucial to the open home.

Signal Technology Foundation protects free expression and enables secure global communication through open source privacy technology.

The Surveillance Technology Oversight Project (S.T.O.P.) advocates and litigates for privacy, working to abolish local governments’ systems of discriminatory mass surveillance that disproportionately impact vulnerable communities.

Tech Policy Press publishes reporting, analysis, and perspective on events, issues, and ideas at the intersection of technology and democracy.

Through engaging with lawmakers, exposing false narratives and bad actors, and pushing for landmark legislation, the Tech Oversight Project seeks to hold tech giants accountable for their anti-competitive, corrupting, and corrosive influence on our society and the levers of power.

Our mission at ISRG is to reduce financial, technological, and educational barriers to secure communication over the Internet. We operate three projects (Let’s Encrypt, Prossimo, and Divvi Up) that improve the security and privacy of billions of people using the Internet.

The Algorithmic Justice League is on a global mission to prevent AI harm using research, advocacy, and art.

The British Institute of International and Comparative Law (BIICL) hosts the Competition Law Forum, a centre of excellence for European competition and antitrust policy and law.

The Bull Moose Project Foundation develops and promotes policies that promote fair markets, support American innovation, and hold Big Tech accountable for anti-competitive and anti-consumer conduct.

The Canadian Anti-Monopoly Project (CAMP) is a think tank dedicated to addressing the issue of monopoly power in Canada and around the world. CAMP produces research, commentary, and policy to make our economies more fair, free, and democratic.

Consumers International is the global membership organisation for consumer rights groups. Founded in 1960, we bring together over 200 member organisations in more than 100 countries, with a mission to empower and champion the rights of consumers everywhere and to build a fair, safe and sustainable marketplace.

DPEF empowers people to understand how our communications and governance systems should serve democracy — and how corporate power threatens our economy and our democratic future.

Digital Rights Watch is Australia's leading digital rights organisation. They defend and promote privacy, democracy, fairness and fundamental rights in the digital age.

The Society for Civil Rights e.V. (Gesellschaft für Freiheitsrechte e.V. or "GFF") is a donor-funded organization from Germany that defends fundamental and human rights by legal means. The organization promotes democracy and civil society, protects against disproportionate surveillance and advocates for equal rights and social participation for everyone.

noyb is committed to the legal enforcement of European data protection laws and has filed more than 850 cases against numerous intentional infringements by Big Tech companies - to make online privacy a reality for everyone.

The Internet Archive's mission is to provide “Universal Access yo All Knowledge” by preserving and providing free access to digital materials and cultural heritage serving as a digital library for researchers, historians, scholars, and the public to read, learn, and explore for free.

Open Rights Group is the UK’s largest grassroots digital rights campaigning organisation, working to protect everyone’s rights to privacy and free speech online.

In the past year, OSTIF collaborations led to the fixing of over 130 findings with security impact. Our security uplifts to open source projects wouldn't be possible without the continued support from DuckDuckGo. We are honored to be part of this program and contribute to a more secure Internet ecosystem.

The Perl and Raku Foundation is dedicated to the advancement of the Perl and Raku programming languages, through open discussion, collaboration, design, and code.

Privacy Rights Clearinghouse focuses on increasing access to information, policy discussions, and meaningful rights so that data privacy can be a reality for everyone.

Restore the Fourth advocates with federal, state and local elected officials, to defend privacy and freedom from unreasonable government surveillance.

At the Tor Project, we believe everyone should be able to explore the internet with privacy. We advance human rights and defend your privacy online through free, open source software and the decentralized Tor network.

The Markup challenges technology to serve the public good by producing investigative journalism, unique tools, and accessible resources to inspire action and agency.


We believe the best way to protect your personal information from hackers, scammers, and privacy-invasive companies is to stop it from being collected at all. To make that happen, we offer a layer of protection for everything you do online. Our browser, for example, is packed with a suite of built-in privacy protections, including our search engine that never tracks you. Our growing suite of private, useful, and optional AI tools is the next evolution.
AI tools have quickly become a significant part of people's online experience, but there’s a gap between how often we use AI, and how safe and in control we feel about it. According to recent Pew research, 27% of US adults use AI tools every day, but 59% feel no control over how AI shows up in their lives. That's why we created Duck.ai, which gives you access to popular AI models from OpenAI, Anthropic, Meta, and Mistral, with the following added protections built by us:
Today, we're expanding Duck.ai by giving DuckDuckGo subscribers access to more advanced AI models, covered by the same strong protections. The base version of Duck.ai is not changing; it’s still free to use, with no account necessary. We’re just adding more models for subscribers. You can see which models are available with and without a subscription here.
Please note that Duck.ai is always optional, whether you’re a subscriber to DuckDuckGo or not. If AI is not for you, you can hide the AI buttons and features from your search settings and your desktop and mobile browser settings. If you use the VPN, for example, but you’re not interested in anonymized AI chat, that’s no problem. Just head to your browser’s Settings menu to turn off the AI features and continue using your VPN normally.

Formerly known as Privacy Pro, the DuckDuckGo subscription expands the great protection you get from DuckDuckGo’s free offerings, covering even more of what you do online:
The price is staying the same in all regions: $9.99 USD/month or $99 USD/year, with international pricing information available on this help page.

More advanced AI models like OpenAI’s GPT-4o are built to handle more complicated tasks than their smaller counterparts like GPT-4o mini. These bigger models are better at following detailed instructions, maintaining context through extended chats, and delivering deeper, more nuanced responses. The DuckDuckGo subscription offers a way to use some of these models, but with more privacy. Even larger and more highly advanced models will be made available through higher subscription tiers in the future.
If you’re a frequent user of different advanced chatbots, the DuckDuckGo subscription is an easy one-stop solution. It lets you access multiple premium models in one place, rather than juggling multiple subscriptions and apps. Your subscription lets you visit Duck.ai and use those premium models in any browser you like. But it's especially convenient within the DuckDuckGo browser, where Duck.ai is seamlessly integrated on both desktop and mobile. Using the DuckDuckGo browser, you can access AI chat when and where you need it, getting support for specific tasks without switching platforms. And as always, it’s completely optional – you can adjust or turn off Duck.ai’s integrations from your browser’s settings menu.
Whether you subscribe for premium models or stick with the free tier, you get the same strong privacy protections.
When you get a DuckDuckGo subscription, you get instant, full access to any or all the features you want, without complex add-ons – at a price competitive with any of the individual features on their own. The $9.99 USD monthly price tag is more cost effective than maintaining multiple separate AI subscriptions – many of which are in the $20/month range. (See this help page for more international pricing information.)
Additional features like the DuckDuckGo VPN and Personal Information Removal service add value and convenience – and everything is available in one place, your DuckDuckGo browser.
Want to give it a try for free? You can get a 7-day trial of the subscription in the DuckDuckGo Browser's settings. In the US, you can also access the 7-day trial at DuckDuckGo.com/subscribe.

Duck.ai can be accessed from any browser. Just visit duck.ai or hit the Duck.ai button on any search engine results page on duckduckgo.com. From there, paid subscribers can head to Duck.ai Settings, click “I Have A Subscription”, and follow the prompts to access the premium models.
If you are using the DuckDuckGo browser, you can use more subscription features, like the VPN and Personal Information Removal*. You also have even more ways to get to Duck.ai! You can click the optional Duck.ai buttons in our desktop and mobile browsers, use one of our iOS widgets, or press and hold the DuckDuckGo icon on iOS or Android. However you get there, the process for activating your subscription is the same.
Learn more about the DuckDuckGo subscription and sign up at duckduckgo.com/subscribe
*The DuckDuckGo subscription is available in the U.S., Canada, the E.U. and the U.K. All subscribers can use the VPN and access the same premium AI models, regardless of region. Personal Information Removal is available to U.S.-based subscribers. Identity Theft Restoration coverage varies by region. Learn more here.

Privacy Pro is our privacy-protecting subscription service that includes the DuckDuckGo VPN, Personal Information Removal to protect yourself from data brokers, and Identity Theft Restoration, which you can call if your identity is ever stolen.
In the year since we launched Privacy Pro, we’ve been working hard behind the scenes to make it more comprehensive, more powerful, and easier to use. Have you been waiting for the perfect moment to sign up? Good news: you can now try Privacy Pro free for 7 days. The free trial is available on all platforms – sign up here to redeem the offer. After your free trial, you can continue at $9.99 USD/month or $99.99 USD/year. (International pricing information here.)
Here’s a look at the major improvements we’ve made in the past year! To learn even more about Privacy Pro, you can visit our blog and Help Pages.

Privacy Pro subscriptions are now available in the U.S., E.U., Canada, and the U.K. Features and coverage vary by region, but the DuckDuckGo VPN works the same in all regions. You can now use Privacy Pro in more languages including Dutch, French, German, Italian, Polish, Portuguese, Russian, and Spanish. Learn more about using Privacy Pro outside the U.S. here.

DuckDuckGo VPN users can now choose from more than 40 locations in 30+ countries. Check out the full list here.
We partnered with Securitum to conduct a comprehensive security audit of the DuckDuckGo VPN and supporting infrastructure. We're pleased to report that it found no critical vulnerabilities, underscoring the strong security measures we have in place for our VPN! Visit this help page for a summary of the key findings, remediations, and accepted risks, plus a link to the full report.
The DuckDuckGo VPN now automatically blocks known phishing, malware, and scam sites – no matter what browser you're using. This new setting is on by default on all platforms.
All users can now get notifications that display VPN status at a glance. These notifications are on by default but can be disabled in your VPN Settings.
All desktop users now have a setting that lets the VPN connect automatically when you log in to your computer.
Because some apps and websites aren’t compatible with VPNs, we made sure you can exclude them from our VPN. This lets you use those incompatible apps and websites on desktop without disconnecting from the VPN. (App exclusions are also available on Android. Not compatible with iOS.) Manage website and app exclusions in your VPN settings; you can also manage website exclusions by clicking on the VPN icon in the toolbar.
We created VPN widgets for the iOS home screen and Control Center, so you can quickly connect or disconnect from the VPN and see your VPN connection status at a glance. We also added a Siri Shortcut.
Both iOS and Android users can now “snooze” the VPN for easier access to sites and apps incompatible with VPNs.
To help avoid dropped calls on Android, we introduced a setting that temporarily snoozes the DuckDuckGo VPN during Wi-Fi calls. The best part? We automatically restore your VPN connection when you end your call.
Our new auto-exclude feature on Android automatically detects apps that aren’t compatible with VPNs and bypasses them, so you won’t need to manually adjust settings. (If you would like to adjust this feature, you can! Just go to Settings > VPN > Manage Apps.)
You can now switch between the default DuckDuckGo DNS resolvers and a custom DNS resolver of your choosing in VPN Settings > Advanced Settings.

We completely redesigned the Personal Information Removal dashboard to give Privacy Pro subscribers more insight into the data removal process. You can more easily see when a site was last scanned, how many records have been removed, which sites are clear of your personal information, and more.
Monitor your data broker removal requests with our new Removal Request timeline. You can track the progress of each request, see when your data has been removed, and get help with next steps if any removals take longer than expected.
Privacy Pro now covers over 80 data broker sites and counting, including FastPeopleSearch, MyLife, and OfficialUSA.com. Check out the full list here. Some competitors only re-scan data broker sites on a monthly or quarterly basis…or not at all! But we re-scan the sites every 10 days, submitting new removal requests if your data has reappeared.
Personal Information Removal now more reliably detects when your information has been removed from the data broker sites. Your first scan after signing up or updating your profile now happens 10x faster than before.
Even more improvements are coming soon. We’re working on adding an upgraded AI chat experience to your subscription, with anonymized access to more advanced chat models than the free version on Duck.ai. We’re adding more data brokers to Personal Information Removal all the time, and we’re working on bringing the feature to mobile. Your feedback helps us catch and address bugs, too – so keep it coming!
Go here to redeem your free trial today. Follow us on social [Reddit/X/Facebook/Linkedin] for updates about all things DuckDuckGo, including more Privacy Pro improvements.

Have you been using the DuckDuckGo browser for a while? If so, you may have noticed a few changes around here! As you navigate through the browser, you’ll notice redesigned icons, a softer, rounder interface, and a fresh color palette. Moving between desktop and mobile is more seamless than ever. And new interactive elements show you exactly how DuckDuckGo is protecting you.

We’ve updated our browser’s visual design with a new color palette and softer, rounder shapes, including new icons that we designed in-house. This new look reflects what we believe the internet should feel like with real privacy protection: calm instead of chaotic, streamlined instead of cluttered, secure instead of surveilled.

Hit the green duck-foot shield in the redesigned address bar for real-time information about our tracking protections. Use the redesigned Fire Button to delete your browsing data with one click. Other changes you’ll notice include smoother, softer tab lines and a roomier address bar.

We’ve also made it easier than ever to access our private, useful, and optional AI features. Add a Duck.ai button to your URL bar for quick access to free, anonymized AI chats – available on both desktop and mobile.

These new buttons join several other convenient access points. On iOS, get to Duck.ai via Siri shortcut or widgets for your Lock Screen and Control Center. On Android, you find a shortcut by pressing and holding the DuckDuckGo app icon. (There’s also a Duck.ai button on our search results page when you visit duckduckgo.com, which can be toggled on and off here.)
Don’t use Duck.ai? You can disable the feature and hide the buttons in your browser’s Settings menu.

We love our browser’s new look – and we hope you do, too. If you have comments or questions, you can join our active community on Reddit or reach out on social media (Facebook | Linkedin | X).


It’s not your imagination – online scams are getting more sophisticated. According to new reporting from the United States’ Federal Trade Commission, consumers lost $12.5 billion to fraud in 2024 alone. Scams related to investments, online shopping, and internet services were among the worst offenders.
Around here, we believe the best way to protect your personal information from hackers, scammers, and privacy-invasive companies is to stop it from being collected at all. Our browser and built-in search engine never track your searches, and our browsing protections help stop other companies from collecting your data, too. One of those protections is our Scam Blocker, designed and built by us for your security and your privacy. Scam Blocker guards against phishing sites, malware, and other common online scams without tracking your browsing data or sharing it with any third parties. It’s built into the DuckDuckGo browser and free to use, with no signup required.

Fake cryptocurrency offers, urgent messages about "viruses," and high-paying surveys – like the hypothetical examples above – are some of the common scam sites covered by DuckDuckGo’s Scam Blocker.
Scammers and cybercriminals have constantly evolving tactics, so it’s important to stay protected on multiple fronts. Thanks to Scam Blocker, the DuckDuckGo browser can help you spot and avoid some of the most common types:
The scam tactics vary, but the end goals are usually the same: to commit financial fraud using your personal information or to trick you into paying for products or services that don’t exist. If you accidentally click a link that would take you to one of these scammy sites, DuckDuckGo’s built-in Scam Blocker will stop the page from loading and show you a warning message that allows you to navigate safely away. The DuckDuckGo browser also reduces your malicious ad risk while you browse, blocking tracker-powered ads while before they load.
Other browsers like Chrome, Firefox, and Safari rely on Google’s Safe Browsing Service to provide warnings about phishing sites, which involves sending information to Google. We don’t. We built our own anonymous solution that doesn’t send data to any third parties. No sign in, no tracking, and it’s on by default, so you're protected from the moment you open the browser. DuckDuckGo subscribers can connect to the DuckDuckGo VPN to get these protections for your whole device – including in other browsers!

When you land on a potentially dangerous website, Scam Blocker will display a warning message before loading the site.
New scam sites pop up all the time, but the DuckDuckGo browser stays on top of it. We get a feed of malicious site URLs from Netcraft, an independent cybersecurity company that’s always scanning for new threats. We store that constantly refreshing list on our servers and pass any updates to your browser every 20 minutes.
The way Scam Blocker works is always anonymous. Once your browser downloads the latest dangerous site list from DuckDuckGo, it’s available locally on your device. When you navigate to a site, your browser first checks the site against the list stored on your device. If the site is on the list, your browser shows a warning message that gives you the option to navigate away safely or to continue to the site at your own risk.
Most of the potentially dangerous URLs flagged by Scam Blocker can be found on common sites like Google Drive or GitHub. Uncommon threats – which we encounter less than 0.1% of the time! – require an extra verification step that checks websites against a larger and more comprehensive database on DuckDuckGo servers. But this process is also anonymous; at no time during the threat verification process does your device communicate with any third parties. For a deeper dive on the cryptography we use to maintain anonymity when handling uncommon threats, visit this Help Page.
All this means that your searches and browsing history are still completely anonymous.
Note: This blog post has been edited since initial publication to stay up to date with our evolving product offerings.

At DuckDuckGo, we believe the best way to protect your personal information from hackers, scammers, and privacy-invasive companies is to stop it from being collected at all. We started with a search engine that doesn’t collect your search history; our flagship experience is now a browser with a suite of built-in protections that includes our search engine, ad and cookie blocking, and many more protections.
Our approach to AI extends this strategy by integrating protected AI features that offer the productivity benefits of AI without privacy risks like tracking your prompts and training on your data.
We’re not making AI features just for the sake of making AI features. They have to be actually useful in everyday use, starting with helping people get faster, high-quality answers to their questions. However, we recognize not everyone wants AI in their lives right now, and that’s OK with us. That’s why all our AI features are optional and can be turned off or tuned down.

Head to Duck.ai for free, proxied access to popular chatbots from OpenAI, Anthropic, Meta, and Mistral.
A search engine’s core job is to get you the high-quality information you want fast. AI can help with that job, including a new mode of information-seeking through chat. We’re finding that some people prefer to start in chat mode and then jump into more traditional search results when needed, while others prefer the opposite. (Some questions just lend themselves more naturally to one mode or the other, too.) So, we thought the best thing to do was offer both. We made it easy to move between them, and we included an off switch for those who’d like to avoid AI altogether.
If you want to start with chat, try Duck.ai (previously called DuckDuckGo AI Chat), a free and account-less way to access popular AI chatbots, privately. Models are periodically updated and currently feature GPT-4o mini and o3-mini from OpenAI, open-source models Meta Llama 3.3 and Mistral Small 3, and Claude 3 Haiku from Anthropic. Chats are anonymized via proxying and never used for AI model training.
You can navigate directly to https://duck.ai/ or via the optional chat icons within our search engine or browsers. (There's also a widget - on iOS for now.) You can also use the !ai or !chat bang search commands from any browser where you have DuckDuckGo search set as the default search engine.

One way to access Duck.ai is via the Chat icons in our desktop and mobile browsers.
If you’d rather start with traditional search results, simply use DuckDuckGo search as usual. AI-assisted answers – previously called DuckAssist – will automatically appear on the search results page for relevant English language queries. You can also manually trigger an AI-assisted answer on demand by pressing the “Assist” button under the search box, which appears on most queries. The answers source information from across the web, and like Duck.ai, they are completely free and private, with no sign-up required.

The “Assist” button lets you generate AI-assisted answers on demand.
We’ve continuously heard from users that they want more quick, at-a-glance answers, for a broad range of topics. For years, we’ve been doing that by working on search modules to provide instant answers for things like sports scores, local business information, where to watch movies and TV shows, and much more. Now, we are finding that we can significantly expand the scale of high-quality instant answers we can show with AI as we’re now serving millions of AI-assisted answers daily. Since we’ve introduced AI-assisted answers on our search results, overall user satisfaction with our search results has improved.
If you were unsatisfied after trying DuckDuckGo search in the past, now is a great time to try us again. We’re always improving. If you do try us or try us again, please set DuckDuckGo search as your default search engine or download our browser and make it the device default. It can take a moment to get used to something different, and setting the default is the best way to get over that hump.
Navigate to the AI Features section of your search settings. If you really like our AI-assisted answers, change Assist to Often, which will make them appear over 20% of time. On the other hand, if you never want to see any AI features, turn Chat to Off and Assist to Never.
On DuckDuckGo browsers, you can choose whether the chat icon appears on the toolbar from within the ‘Duck.ai’ section in your browser settings.

Control how often you see AI-assisted answers from your search settings.
In addition to respecting our users’ choices, we respect publishers’ wishes to opt out of AI-assisted answers on DuckDuckGo and don’t penalize publishers for that choice. Even if they opt out as a source for our AI-assisted answers, they can stay opted into our other search results.
When we generate AI-assisted answers, we anonymously call the underlying AI models used to summarize web sources on your behalf, so your personal information is never exposed to third parties. This method is called proxying. Duck.ai chats work similarly. To accomplish this technically, we remove your IP address completely and use our own IP address instead. This way, the proxied requests are coming from us, not you. For more information, please see the DuckDuckGo General Privacy Policy.

Duck.ai's "Recent Chats" let you pick up where you left off. Chats are saved locally on your device – not on DuckDuckGo or any other outside servers.
Within Duck.ai, recent chats are only stored locally on your device, not on DuckDuckGo servers. Not interested in storing your chats? You can disable the option altogether, or use the Fire Button to clear all your recent chats at once. Duck.ai chats are not used for any AI training, either by us or the underlying model providers. To respond with answers and ensure all systems are working, these providers may store chats temporarily, but we remove all the metadata so there’s no way for them to tie chats back to you personally. On top of that, we have agreements in place with all providers to ensure that any saved chats are completely deleted within 30 days. For more information, please see the DuckDuckGo AI Chat Privacy Policy and Terms of Use.

Clear your recent Duck.ai chats with the click of a button.
When you search on DuckDuckGo, our AI-assisted answers are based on real-time web crawling, so they’re as reliable as the sources from which they are drawn. But even the most reliable sources can have errors, and mistakes can occasionally happen in the summarization process, too. That’s why we prominently display our cited sources: you can easily check them out and use your own judgment to make the final call.

Want to know where your AI-assisted answer came from? Check the sources below the answer and click through for a deeper dive into complex topics.
We also have a number of precautions in place. Out of the countless websites we could draw from, we try to weed out ultra-low-quality sources like spammy content farms and invasive people search sites, and we try to avoid satirical sites and opinion pieces.
You are a critical part of the process as well. “Was this helpful? 👍 👎” is displayed next to every AI-assisted answer. So, if you see a bad answer – or a great answer! – please let us know. We review it all as part of our quality control process.
Yes! AI-assisted answers are integrated into DuckDuckGo search, which is always free to use, with no log-in required. (We make money from private search ads.) Chatting on Duck.ai is also free within a daily limit, which we implement while maintaining strict user anonymity, just like we do for our search engine. We plan to keep the current level of access free; we’re exploring a paid plan for access to higher limits and more advanced (and costly) chat models.
We are largely driving our AI roadmap based on your feedback, so please keep it coming—we appreciate it. Within Duck.ai, this includes adding newer models, voice and image support, and granting models web access. For AI-assisted answers on our traditional search engine, we’re making them faster and more interactive, answering more queries, and improving when they appear automatically, including for less straightforward queries.
In the meantime, give Duck.ai a try and keep an eye out for AI-assisted in your traditional search results. Head to your search settings if you want to see them more or less often.

2024 marks DuckDuckGo's 14th year of donations—our annual program to support organizations that share our vision of raising the standard of trust online. We are proud to donate to diverse group of organizations around the world that promote privacy, digital rights, access to information online, and a healthier online ecosystem.
This year, we’re donating $1,100,000, bringing DuckDuckGo's total donations since 2011 to $6,950,000. Everyone using the Internet deserves simple and accessible online protection; these organizations are all pushing to make that a reality. We encourage you to check out their valuable work below, alongside details about how our funds were allocated this year.

“EFF's mission is to ensure that technology supports freedom, justice, and innovation for all people of the world.”

"Public Knowledge promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. We work to shape policy on behalf of the public interest."

"Established in 1987, ARTICLE 19 is an international non-profit organization that defends freedom of expression, fights against censorship, protects dissenting voices, and advocates against laws and practices that silence individuals, both online and offline."

"DPEF educates our members and the general public about matters pertaining to the democratic nature of our nation’s communications infrastructure and governance structures, and the impacts of corporate power over our economy and democracy."

"The EDRi network is a dynamic and resilient collective of 50+ NGOs, as well as experts, advocates and academics working to defend and advance digital rights across Europe and beyond. For over two decades, it has served as the backbone of the digital rights movement and has achieved landmark successes in digital rights in Europe."

"Known for organizing some of the largest and most effective online campaigns in history, Fight for the Future’s mission is to ensure a just Internet and technology that is a force for empowerment and liberation, free of surveillance, censorship, and abuse of personal data."

"The Markup challenges technology to serve the public good by producing investigative journalism, unique tools, and accessible resources to inspire action and agency."

"OpenMedia is a community-driven organization that works to keep the Internet open, affordable, and surveillance-free. We operate as a civic engagement platform to educate, engage, and empower Internet users to advance digital rights around the world."

“Restore the Fourth opposes mass government surveillance, and organizes locally and nationally to defend privacy and the Fourth Amendment.”

“Signal Technology Foundation protects free expression and enables secure global communication through open source privacy technology.”

“The Surveillance Technology Oversight Project (S.T.O.P.) advocates and litigates for privacy, working to abolish local governments’ systems of discriminatory mass surveillance."

“Tech Policy Press promotes discussion, debate, and analysis of issues and ideas at the critical intersection of technology and democracy.”

"Through engaging with lawmakers, exposing false narratives and bad actors, and pushing for landmark legislation, the Tech Oversight Project seeks to hold tech giants accountable for their anti-competitive, corrupting, and corrosive influence on our society and the levers of power."

“AJL’s harms reporting platform aims to capture people's lived experiences with AI harms, connect them with resources, and identify areas where there are no or few resources.”

“Bits of Freedom shapes tech policy in order to facilitate an open and just society, in which people can hold power accountable and effectively question the status quo.”

"The Competition Law Forum is a centre of excellence for European competition and antitrust policy and law at the British Institute of International and Comparative Law (BIICL)."

“UCLA Center for Critical Internet Inquiry (C2i2), housed in the UCLA Division of Social Sciences, is a critical internet studies community committed to reimagining technology, championing social justice, and strengthening human rights through research, culture, and public policy.”

“Creative Commons (CC) is an international nonprofit organization dedicated to building and sustaining a thriving commons of shared knowledge and culture that serves the public interest.”

"Digital Rights Watch is Australia's leading digital rights organisation. They defend and promote privacy, democracy, fairness and fundamental rights in the digital age."

"The Society for Civil Rights e.V. (Gesellschaft für Freiheitsrechte e.V. or "GFF") is a donor-funded organization from Germany that defends fundamental and human rights by legal means. The organization promotes democracy and civil society, protects against disproportionate surveillance and advocates for equal rights and social participation for everyone."

"noyb is committed to the legal enforcement of European data protection laws and has filed more than 850 cases against numerous intentional infringements by Big Tech companies - to make online privacy a reality for everyone."

“The Open Home Foundation fights for the fundamental principles of privacy, choice, and sustainability for smart homes - and for every person who lives in one. It is best known as the organization that owns and governs Home Assistant, among many other projects crucial to the open home."

"Open Rights Group is the UK’s largest grassroots digital rights campaigning organisation, working to protect everyone’s rights to privacy and free speech online."

"Open Source Technology Improvement Fund helps critical open source projects with their security needs and is grateful for the continued support from DuckDuckGo. This funding is pivotal to ongoing operations, as it is one of our only donation sources that is not tied to any deliverable or project. Over the past year, OSTIF has been able to sustainably help critical open source projects improve their security posture, and in the process have found and fixed over 150 bugs and vulnerabilities."

"The Perl and Raku Foundation is a non-profit, 501(c)(3) which fulfills a range of activities including the collection and distribution of development grants, sponsorship and organization of community-led local and international Perl conferences, and support for community resources and user groups."

"Privacy Rights Clearinghouse focuses on increasing access to information, policy discussions, and meaningful rights so that data privacy can be a reality for everyone."
"Proof is a new nonprofit journalism studio that is working to redefine and reimagine trustworthiness in news and investigative reporting."

"At the Tor Project, we believe everyone should be able to explore the internet with privacy. We advance human rights and defend your privacy online through free, open source software and the decentralized Tor network."

Today, we are calling on the European Commission to launch three non-compliance investigations around Google’s obligations under the EU’s Digital Markets Act (DMA):
The DMA created these obligations to address Google’s scale and distribution advantages, which the judge in the United States v. Google search case found to be illegal. The judge specifically highlighted that 70% of queries flow through search engine access points preloaded with Google, which creates a “perpetual scale and quality deficit” for rivals that locks in Google’s position.
Unfortunately, Google is using a malicious compliance playbook to undercut the DMA. Google has selectively adhered to certain obligations – often due to pressure from the Commission – while totally disregarding others or making farcical compliance proposals that could never have the desired impact. As a result, the DMA has yet to achieve its full potential, the search market in the EU has seen little movement, and we believe launching formal investigations is the only way to force Google into compliance. The Commission has already demonstrated its ability to use such investigations effectively under the DMA.
While Google’s bad faith approach is not surprising, it should not go unnoticed. Any regulator looking to create enduring competition in the search market should take note of the tactics Google is using to thwart and circumvent its legal obligations.
Google’s exclusive default distribution deals mean they see many times more search queries than any competitor can, which gives them what’s called a “scale advantage.” In Article 6(11), the DMA directly addresses this scale advantage by mandating Google share anonymized click, query, ranking, and view data. This data would help search engines improve results quality, especially for less frequent (so-called “long-tail”) queries.
Google’s Click-and-Query obligation under the DMA, Article 6(11), reads:
“The gatekeeper shall provide to any third-party undertaking providing online search engines, at its request, with access on fair, reasonable and non-discriminatory [FRAND] terms to ranking, query, click and view data in relation to free and paid search generated by end users on its online search engines. Any such query, click and view data that constitutes personal data shall be anonymised.”
To comply with this requirement, Google announced the “Google European Search Dataset Licensing Program.” However, this data set has little to no utility to competing search engines due, in large part, to Google’s proposed anonymization method, which only includes data from queries that have been searched more than 30 times in the last 13 months by 30 separate signed in users. This method is conveniently overbroad: we extrapolate that Google’s dataset would omit a staggering ~99% of search queries including “longtail” queries that are the most valuable to competitors. Google is trying to avoid its legal obligation in the name of privacy, which is ironic coming from the Internet’s biggest tracker.
Part of our goal at DuckDuckGo has always been to prove that tech can make great products without exploiting people’s data or using mass surveillance. Our Privacy Policy explains how we go about doing this, for example, “we have no way to create a history of your search queries.” We do this by stripping out any metadata that can tie searches together made by the same individual, so re-identification cannot happen like in the memorable AOL case. For example, we may know that we got a lot of searches for "cute cat pictures" today, but we don’t know - and have no way to figure out - who actually performed those searches.
The fact is that most "rare" queries are actually just common words put in an order that isn’t searched very often. These queries are not inherently problematic since they cannot be traced back to any individual. So, instead of attempting to filter all of these relatively unique queries, we should instead focus on removing the subset of those queries that contain personal identifiers, like addresses and phone numbers or accidental pastes like user ids and passwords. Fortunately, there are relatively straightforward approaches to remove these types of queries that will result in much of the long tail data remaining available to improve search results.
This isn’t even the only part of the proposal that severely hampers the usefulness of the data:
We recognize that fine-tuning the right approach requires further considerations and, most importantly, testing and good faith cooperation from Google. Faced with Google’s continued obstruction, we believe that opening an official investigation is the only way to arrive at a workable proposal. We would like to help in that effort and believe there are ways for Google to provide a data set that is both privacy respecting and useful to competitors.
The DMA includes provisions designed to facilitate easy switching of search engines and browsers, targeting Google’s entrenched hold over search and browser access points. Google’s obligation under Article 6(3) of the DMA reads:
“The gatekeeper shall allow and technically enable end users to easily change default settings on the operating system, virtual assistant and web browser of the gatekeeper.”
Despite this obligation, switching search engines on Android devices (which make up more than 60% of the mobile market in the EU) is still not “easy.” Before the DMA came into effect, it took more than 15 steps to switch your default search engine on Android and today that is still the case.
Zero changes have been made. What should happen is that users should be able to change their default search engine across every search access point in one click, similar to how a choice screen works, but currently choice screens are only shown on device onboarding. Users should be able to get back to a similar screen via a top-level device setting for default search, which we should be also able to guide users to directly from our app.
Similarly on Chrome, switching the default search engine has not been made any easier either. For example, there’s still no way to guide a user directly to the default search engine setting from the DuckDuckGo search homepage. And Google’s persistent dark pattern for search extensions on Chrome remains.
Google has completely ignored its easy switching obligations under the DMA. As a result, we believe the Commission must launch a non-compliance investigation to get Google to fulfill its requirements under the law. “Easy switching” should mean competition is actually one click away.

Article 6(3) DMA requires Google to show choice screens to end users “at the moment of the end users’ first use of an online search engine or web browser.”
Google’s search engine DMA choice screen is explicitly different from the choice screen Google implemented following the Android case. Key improvements have been made to its design, such as automatically showing taglines. But Google has not rolled out this updated DMA choice screen to all Android users, in breach of Article 6(3). Apple, for example, rolled out its DMA browser choice screen to its entire EEA user base and is planning to do so again after an investigation from the Commission – this time to Safari default users only.
A non-compliance investigation must therefore be opened to ensure that Google will fulfill its obligation and roll out both the DMA search engine and browser choice screens to all Android devices at once like they did on Chrome for desktop and iOS. When those Chrome choice screens rolled out, the positive competitive impact was evident: DuckDuckGo search queries on Chrome have increased by around 75% across the EEA. This rapid and stable growth in query volume shows pent-up demand by Chrome users for privacy-respecting search alternatives.
Regulators around the world should be looking at what’s happening with the DMA, learn from how Google has been able to exploit its loopholes and circumvent it, and then take steps to make sure Google cannot continue to put up roadblocks in the way of progress and fair competition.
In the EU, Google chose to roll out self-serving compliance proposals around these obligations without engaging in meaningful consultations, leading to significant delays in achieving contestability and fairness, the objectives of the DMA. Given the opportunity, it should not come as a surprise that Google is taking advantage.
Instead, regulators and market participants should be able to review, test, and validate remedies before they are implemented to ensure they actually accomplish their intended purpose, while maintaining the regulatory authority to launch investigations and make changes after implementation, if necessary. Regulators can set additional criteria to make sure these interventions have the desired impact. For example, dominant firms could be required to demonstrate that consumers understand how to switch and that switching to a competitor is equivalently easy to sticking with the services from the dominant firm.
In addition, we believe the DMA doesn’t properly address Google’s scale advantage. Sharing click-and-query data is a critical intervention to address Google’s scale advantage, but alone, it isn’t sufficient to create a competitive search engine. As we’ve previously written, we believe the best and fastest way to level the playing field on search quality is for Google to provide access to its search results via real-time APIs (Application Programming Interfaces), also on FRAND (Fair, Reasonable, and Non-Discriminatory) terms. That means for any query that could go in a search engine, a competitor would have access to the same search results.
If Google is required to license its search results in this manner, this would allow existing search engines and potential market entrants to build on top of Google’s various modules and indexes, and offer consumers more competitive and innovative alternatives. In addition, while choice screens are an excellent mechanism to provide consumers access to competitors, they need to be shown periodically, at least yearly, to give competing search engines a chance to build awareness over time. We are happy to work with regulators to craft remedies that will create enduring search competition.

At DuckDuckGo, we know what it's like to turn a vision into a successful company. Our founder and CEO, Gabriel Weinberg, began DuckDuckGo’s journey to “raise the standard of trust online” from his basement in Pennsylvania and turned it into a browser and search engine used by millions of people around the world.
Today, this vision still inspires us. Each year, we donate to non-profit organizations that align with this vision, and now we're investing in companies that align with it as well.
As more and more consumers seek privacy-conscious technologies, we want to partner with other like-minded entrepreneurs and help turn their visions into reality. With the core objective of supporting consumer privacy technologies, DuckDuckGo is actively investing in early-stage companies as well as pursuing acquisitions and partnerships. We've actually already been doing this quietly for the last couple years, and we’re energized to do more. So, we'd love to hear from you and find ways to work together.
We are focused primarily on three domains:
For early-stage investments, we are flexible on deal structure, aim to move quickly and are happy to co-invest with other companies, funds, and individuals. For acquisitions, we are open to a range of companies that share a commitment to protecting user privacy.
You can reach Mike Marino, SVP of Finance and Diana Chiu, Director of Corporate & Business Development directly at investments@duckduckgo.com.

Since the ruling in the U.S. v. Google search case was announced, there has been discussion about how to remedy Google’s dominance. As a company that operates a search engine that directly competes with Google, we have several ideas about how to craft a set of legal and technical interventions that can, in combination, effectively curb the advantages Google has gained through illegal use of their search monopoly. DuckDuckGo believes it is possible to put remedies in place that will establish enduring search competition, encourage innovation and new market entrants, and result in significant market share among multiple competitors.
However, there is no silver bullet remedy that, alone, will adequately address both Google’s scale and distribution advantage as well as ensure that Google cannot circumvent its obligations. Instead, the “remedy” must be a package of remedies that work together to effectively counteract the unlawful competitive imbalance.
Many ideas on the table aim to counteract Google’s distribution advantage, but we believe it’s equally important to address Google’s scale advantage. Google’s exclusive default distribution deals mean they see way more queries than everyone else, a.k.a. their scale advantage. The court’s opinion quantifies this disparity:
More users mean more advertisers, and more advertisers mean more revenues…. Google’s scale means that it not only sees more queries than its rivals, but also more unique queries, known as “long-tail queries.” To illustrate the point, Dr. Whinston analyzed 3.7 million unique query phrases on Google and Bing, showing that 93% of unique phrases were only seen by Google versus 4.8% seen only by Bing.
Google uses this stream of information to continuously improve their results by running large-scale experiments in ways that no rival can because we’re effectively blinded. Google infers the best results based on queries it has seen before. If a search engine sees fewer – or often zero – similar queries, these inferences are less effective.
As the court describes the situation, Google’s scale advantage fuels a powerful feedback loop of different network effects that ensure a “perpetual scale and quality deficit” for rivals that locks in Google’s advantage.

Google’s exclusive defaults are part of a reinforcing feedback loop that gives them an insurmountable scale advantage and makes it difficult for rivals to compete.
The best and fastest way to level this playing field is for Google to provide access to its search results via real-time APIs (Application Programming Interfaces) on fair, reasonable, and non-discriminatory (FRAND) terms. That means for any query that could go in a search engine, a competitor would have access to the same search results: everything that Google would serve on their own search results page in response to that query. If Google is forced to license its search results in this manner, this would allow existing search engines and potential market entrants to build on top of Google’s various modules and indexes and offer consumers more competitive and innovative alternatives.
Today, we believe that we already offer a compelling search alternative with more privacy and fewer ads, relative to Google. We’ve also been working for fifteen years to make our search results on par in terms of feature set and quality by combining our own search indexes with those of partners like Apple, Microsoft, TripAdvisor, Wikipedia, and Yelp. However, we know that many consumers still prefer Google’s results due to the benefits of scale discussed above, and this intervention would erase that advantage, instantly making us and others much more competitive.
We’ve already seen some concerns about this remedy direction that we’d like to quickly address. First, licensing Google’s search results does not involve accessing any user data. This remedy will not invade user’s privacy, which is aligned with our vision as a company. We know from experience that this remedy can be implemented anonymously, and we can advise on that implementation. We can open up Google without opening up user data.
A second potential concern is that long-tail results on leading search engines could be similar in some cases, but that’s a feature not a bug. Google’s scale advantage gives them insights into which obscure links should be ranked higher, and so we should expect that when smaller search engines incorporate this information that some results would become more similar. However, licensing on FRAND terms should also allow competitor search engines to re-rank and mix results with other content, which will enable competitor search engines to produce different ranking algorithms based on the same underlying high-quality search results.
Additionally, FRAND licensing will allow other search engines to more competitively differentiate on things like privacy, design, and customization of the user interface and results page, while still providing high-quality results. For example, we can envision a universe of differentiated and innovative experiences, such as features that allow users to tweak ranking algorithms, features that bring more transparency to ranking algorithms, and other AI capabilities, all leveraging Google’s search result APIs. Future-looking use cases like these must be kept in mind, and FRAND API access is what is needed to power these types of search innovations.
A third concern is that competitor indexes could become too reliant on Google; however, if all the results that come through the APIs can also be used as an input into building search indexes, this would ensure that there is also a path to long term viability and independence for competitors. We, for one, would go further down this path. This could be accelerated if the APIs also provide access to Google’s anonymous ranking signals (for example, how often and quickly people in aggregate click back after visiting a link), which will help tune competitor indexes even faster as well as improve real-time reranking algorithms. That said, we recognize that licensing Google’s search results needs to be a long-term intervention because their scale advantage will persist as long as Google has much more significant market share than competitors.
There are historical precedents for this type of remedy as well. AT&T’s 1956 antitrust agreement required the company to license its patents on FRAND terms, which allowed existing and new companies to build on top of AT&T’s innovations. Similarly, the Telecommunications Act of 1996 encouraged competition in communications markets by requiring large telecommunications providers to interconnect their networks with new competitors on FRAND terms.
This is not a new technical challenge for Google either: Google already licenses their search results, including their ads, via real-time APIs to some competitors. It’s also not novel in antitrust, as API access was at stake in Microsoft’s antitrust settlement two decades ago. An API-based remedy also means that startups could immediately enter the search market rather than be forced to invest tens or hundreds of millions of dollars upfront to get started by acquiring and consuming massive data sets. It also protects nascent competition in AI-driven search by allowing them to use the APIs to ground answers in real-time.
Finally, we should note that the EU’s Digital Markets Act attempts to solve Google’s scale advantage by requiring Google to provide FRAND access to its “click and query data.” To date, this has been ineffective because Google has undermined the requirement by limiting the data they share to the point of being useless. However, while we believe that click and query data is not a substitute for FRAND access to search result APIs, we also believe that if implemented correctly it can complement and further accelerate the path to competitor independence. That’s because API access will be limited to queries a competitor search engine actually sees, whereas click and query data can be much broader, covering almost all the queries Google sees. Therefore, access to this data in a privacy-protective manner should also be given on FRAND terms.
Google likes to claim everyone chooses Google, but most consumers don’t: they just go with the default. The court outlines how staggering this default advantage is:
50% of all queries in the United States are run through the default search access points covered by the challenged distribution agreements…. An additional 20% of all searches nationwide are derived from user-downloaded Chrome, a market reality that compounds the effect of the default search agreements. That means only 30% of all [general search engine] queries in the United States come through a search access point that is not preloaded with Google. Additionally, default placements drive significant traffic to Google. Over 65% of searches on all Apple devices go through the Safari default. On Android, 80% of all queries flow through a search access point that defaults to Google.
The court also consolidates evidence highlighting that large percentages of consumers don’t even realize they are using Google because of these defaults:
Users are confused and competition is crushed. As a result, Google shouldn’t be able to self-preference its search engine on Chrome and Android, which were developed to expand the reach of Google Search. Within these products, there should be no preset search default. Instead, these platforms need user-friendly settings based on sound principles that provide for:

Image of the search engine choice screen on Android in the EU.
Banning self-preferencing must also include a prohibition on dark patterns, and all remedies must be subject to anti-circumvention provisions. For example, these restrictions should prohibit Google from discouraging users from installing rival apps or search extensions, or encouraging them to switch back to Google.
Unfortunately, a self-preferencing ban won’t create enduring competition by itself. However, as rivals can innovate on top of Google’s search results, and consumers become aware of rival brands and their increased quality, this increased access to consumers will accelerate competition in the search market.
The court has already declared Google’s exclusionary contracts unlawful. While there are methods outside of these exclusive defaults to access search engines, the court recognizes that these “channels are far less effective at reaching users. That is due in part to users’ lack of awareness of these options and the ‘choice friction’ required to reach these alternatives.”
Restricting these exclusive agreements is therefore essential to help open up access to the search market. However, just restructuring these contracts by itself won’t do much because it won’t directly counteract Google’s entrenched advantage. For that, we need to look to the remedies discussed above.
Even the most well-crafted remedies will ultimately fail if Google is in charge of designing and implementing them, as has been the case in the EU. We’ve seen firsthand how Google has easily and repeatedly avoided complying with both the letter and the spirit of the law. Consequently, an independent monitoring body made up of technical experts and affected market participants must be fully empowered to keep Google honest. We should expect that this monitoring entity will need to be in place for as long as the remedies are in place. We cannot let the fox guard the henhouse.
We are not opposed to structural remedies, but they would need to be paired with the additional interventions outlined in this post. In other words, structural changes to Google could theoretically be an accelerant in some circumstances, but regardless are not a replacement for FRAND access to search results and click and query data together with a ban on Google-self preferencing and a restriction on exclusive contracts. And we can envision some scenarios where a particular structural remedy could be more harmful to us than helpful.
Counteracting the entrenched competitive imbalance that Google’s default advantage has afforded them will not happen overnight. Realistically, it will take years for competition to take hold, and a fully-funded and motivated Department of Justice will need to be involved for the long haul. However, we are confident that a package of well-implemented and carefully monitored remedies, each designed to address a specific choke point, can work to create enduring competition in the search market.

DuckDuckGo AI Chat is an anonymous way to access popular AI chatbots – currently, Open AI's GPT 3.5 Turbo, Anthropic's Claude 3 Haiku, and two open-source models (Meta Llama 3 and Mistral's Mixtral 8x7B), with more to come. This optional feature is free to use within a daily limit, and can easily be switched off.
Find AI Chat on your search results page for easy switching between the two.
Our mission is to show the world that protecting your privacy online can be easy. We believe people should be able to use the Internet and other digital tools without feeling like they need to sacrifice their privacy in the process. So, we meet people where they are, developing products that add a layer of privacy to the everyday things they do online. That’s been our approach across the board – first with search, then browsing, email, and now with generative AI via AI Chat.
DuckDuckGo AI Chat is a free, anonymous way to access popular AI chatbots. According to recent Pew reporting, adults in the U.S. have a negative view of AI's impact on privacy, even as they're feeling more positive about AI's potential impact in other areas. "About eight-in-ten of those familiar with AI say its use by companies will lead to people’s personal information being used in ways they won’t be comfortable with (81%) or that weren’t originally intended (80%)." Even so, another recent report shows a steady uptick in the share of U.S. adults who are using chatbots for work, education, and entertainment. If you're interested in AI chatbots but share those privacy concerns, DuckDuckGo AI Chat is for you.
In the industry-wide race to integrate generative AI, there’s a lot of pressure to add AI features just for the sake of saying you have them. We’re taking a different approach. Before adding any AI-assisted features to our products – first DuckAssist, our AI-enhanced Instant Answer, and now AI Chat – we think carefully about how to make them additive to the search and browse experience, and we roll them out cautiously to ensure this is the case. We also recognize these features aren’t for everyone, so we’ve made our AI-assisted features totally optional; if you’re not interested, you can easily switch them all off.
We view AI Chat and search as two different but powerful tools to help you find what you’re looking for – especially when you’re exploring a new topic. You might be shopping or doing research for a project and are unsure how to get started. In situations like these, either AI Chat or Search could be good starting points. If you start by asking a few questions in AI Chat, the answers may inspire traditional searches to track down reviews, prices, or other primary sources. If you start with Search, you may want to switch to AI Chat for follow-up queries to help make sense of what you’ve read, or for quick, direct answers to new questions that weren’t covered in the web pages you saw. It’s all down to your personal preference. That’s on top of AI Chat’s unique generative capabilities, like drafting emails, writing code, creating travel itineraries, and much more.
Since it can be useful to switch back and forth, we’ve made AI Chat accessible through DuckDuckGo Private Search for quick access: after you make a search, just click on the Chat tab underneath the search bar to keep exploring the topic. You can also get to AI Chat directly by navigating to duck.ai or duckduckgo.com/chat; from there, it’s easy to jump back into traditional search using the top navigation.

AI Chat is always anonymous. Want to start over? Hit the Fire Button to delete your current conversation.
When you land on the AI Chat page, you can pick your chat model – currently, OpenAI’s GPT 3.5 Turbo, Anthropic’s latest generation Claude 3 Haiku, and open-source options Mixtral 8x7B and Meta Llama 3 – and start using it just like any other chat interface. Just like searches on DuckDuckGo, all chats are completely anonymous: they cannot be traced back to any one individual. To accomplish that technically, we call the underlying chat models on your behalf, removing your IP address completely and using our IP address instead. This way it looks like the requests are coming from us and not you. Within AI Chat, you can use the Fire Button to clear the chat and start over.
In addition, DuckDuckGo does not save or store any chats. To respond with answers and ensure all systems are working, the underlying model providers may store chats temporarily, but there’s no way for them to tie chats back to you, personally, since all metadata is removed. (Even if you enter your name or other personal information into the chat, the model providers have no way of knowing who typed it in – you, or someone else.) We have agreements in place with all model providers to ensure that any saved chats are completely deleted by the providers within 30 days, and that none of the chats made on our platform can be used to train or improve the models. For more information, please see the DuckDuckGo AI Chat Privacy Policy and Terms of Use.
Yes! AI Chat is free to use, within a daily limit – which we implement while still maintaining strict user anonymity, just like we do for our search engine. We are planning to keep the current level of access free and exploring a paid plan for access to higher limits and more advanced (and costly) chat models.
We’re excited to spread the word about AI Chat, but there are already improvements on the way. Keep an eye out for new capabilities, like custom system prompts, and general improvements to the AI Chat user experience. We’re also planning to add more chat models – potentially including either DuckDuckGo- or user-hosted options. If you’re interested in seeing a particular chat model or feature added in the future, please let us know via the Share Feedback button in the AI Chat screen.
Ready to give it a spin? Head to duck.ai or duckduckgo.com/chat. You can also find it on your search results page – the Chat tab is just under the search box, on the right side, alongside Images and Videos on the left. If you’re a fan of our bangs, you can also initiate an AI chat by starting your search query with !ai or !chat. Not for you? Head to the Search settings menu to disable AI Chat, DuckAssist, or both.
Happy chatting!

Privacy Pro bundles three new protections from DuckDuckGo into one easy subscription. Subscribers get:
Getting these services separately from other companies could cost upwards of $30/month in the U.S.; our users can subscribe to Privacy Pro for $9.99/month or $99.99/year. Privacy Pro is currently available in the United States, Canada, the European Union, and United Kingdom; see this list for the latest availability. Sign up at duckduckgo.com/pro and make sure you're using the most up-to-date version of the DuckDuckGo browser on all your devices. Features and coverage vary by country.
Every day, tens of millions of people rely on DuckDuckGo to add a layer of privacy to their online activities. The centerpiece of our product offering is now the DuckDuckGo browser, which offers the most comprehensive set of free privacy protections by default. (One immediate benefit? Fewer ads and popups than you’d see on other browsers.) Our browser bundles our private search engine, tracker blocking, Email Protection, and more than a dozen other free privacy features in one convenient package. However, there’s only so much protection we can provide for free. For example, some protections, like securing our users’ network connections with a VPN, require significantly more bandwidth and other resources.
Enter Privacy Pro: a three-in-one subscription service that offers even more seamless privacy protection. Privacy Pro subscribers get a fast, secure, and easy-to-use VPN that doesn’t log your activity; Personal Information Removal, which helps U.S.-based users remove your information from “people search” data broker sites that store and sell it; and Identity Theft Restoration, which helps to fix credit report mistakes and recover any resulting financial losses. (Please note: Setting up and managing Personal Information Removal requires a Mac or Windows computer.)
On its own, the DuckDuckGo browser lets you search and browse privately. By adding Privacy Pro, you can also limit data brokers’ access to your personal information and secure your Internet connection across your whole device, which hides your location and device IP address from sites you visit — all in one place.

Adding a Privacy Pro subscription makes the DuckDuckGo browser's best-in-class protections even stronger.
At DuckDuckGo, we don’t track you; that’s our privacy policy in a nutshell, and this new subscription service is no exception. Guided by the principle of data minimization, we designed Privacy Pro to maximize your privacy:
We’re here to seamlessly protect your privacy — not compromise it.
Read the Privacy Policy and Terms of Service for Privacy Pro.

Our non-logging VPN secures your Internet connection on up to five devices at once.
Get an extra layer of online protection with the VPN made for speed, security, and simplicity — built and operated by DuckDuckGo, not an outside provider. Our VPN encrypts your Internet connection for all your browsers and apps across your entire device, hiding your location and IP address from the sites you visit. Because connections are encrypted, your Internet service provider (ISP) can’t see your online traffic either. And we have a strict no-logging policy; we don’t log or store data that can connect you to your online activity, or to any other DuckDuckGo services, such as search.
No need to install a separate VPN app. Once you sign up for Privacy Pro, you can install our VPN right in your DuckDuckGo browser. After that, you can secure your connection in just one click and check its status at a glance. It offers full-device coverage on up to five devices at once.
Our VPN is simple to use. If your VPN connection gets interrupted for any reason, it attempts to reconnect automatically and prevents data leaks until the reconnection is successful. And it works perfectly with DuckDuckGo’s other protections; if you’re an Android user, you should know our VPN is the only one compatible with App Tracking Protection.
We have VPN servers worldwide, and we’ll be adding more over time. To maximize speed and stability, you’ll connect to the closest available VPN server by default, but you can manually choose whichever location you prefer.
To encrypt your traffic and route it through a VPN server, we use the open-source WireGuard protocol, which is fast and secure. We also route your DNS queries automatically through the VPN connection to our own DNS resolvers, which further hides your browsing history from your ISP.
Learn more about the VPN on our Help Pages.

Personal Information Removal helps get your name, address, and more off of people search sites.
Ever tried looking yourself up online? Where our other web tracking protections help defend against trackers that gather your personal information while you browse, Personal Information Removal goes one step further: It works to actually remove personal information, such as your name and home address, from people search sites that store and sell it, helping to combat identity theft and spam.
How does it work? People search sites, like Spokeo and Verecor, are a common type of data broker. They collect your personal information from local and federal records, public forums like social media, and even other data brokers, and make it available online. (If you’re in the U.S., where people search sites can operate freely, you’ve probably seen them in search results when you look up your name.) We scan dozens of these sites for your info and, if found, request its removal, even handling back-and-forth confirmation emails for you automatically behind the scenes. Unlike other similar services, we only contact the data brokers once we confirm that you’re in their databases, and the info you enter for scanning is stored on your device — not on remote servers.
To help us build Personal Information Removal from the ground up while maintaining our strict privacy standards, DuckDuckGo acquired data removal service Removaly in 2022. Removaly was a pioneer in the data removal space, developing a way to navigate data brokers’ confusing opt-out process automatically without compromising users’ privacy in the process.
Personal Information Removal re-scans sites regularly to minimize the risk of your info reappearing, using the data stored on your device. Your device also initiates any removal requests. You can keep tabs on the progress of ongoing removals — and see the personal information we’ve already removed! — on your personal dashboard within the DuckDuckGo browser. Once it’s set up, simply select Personal Information Removal from the browser’s three-dot menu in the upper right.
You'll need to set up Personal Information Removal on one primary Mac or Windows computer. Right now, the dashboard can only be accessed from that device, but we are planning to add the ability to view it from your other devices.
Learn more about Personal Information Removal on our Help Pages. This feature is only available to U.S. subscribers.

Get some peace of mind: if your identity is ever compromised, Identity Theft Restoration is standing by to help.
With more than 1 million cases a year reported in the U.S., identity theft is more common than you might think. And Personal Information Removal helps reduce the chance of identity theft, but unfortunately, nothing can totally prevent it. So, let us give you some peace of mind: If your identity is stolen or compromised, Identity Theft Restoration will help you handle the stress and expense.
Identity Theft Restoration is brought to our users in partnership with Iris® Powered by Generali, one of the oldest firms specializing in identity theft in the U.S. Iris’s identity theft advisors are available 24/7, every day of the year, and answer calls within 11 seconds on average. This responsiveness has earned them 18 customer service awards over the last 10 years.
If your identity is stolen, Iris will collect some details about your situation in order to provide assistance; no personal information is shared between Iris and DuckDuckGo. Once a case is established, Iris has several ways to help get you back on track:
Learn more about Identity Theft Restoration in our Help Pages. Features vary by region.
Ready to give Privacy Pro a try? Make sure you’ve got the latest version of the DuckDuckGo browser (iOS / Android / macOS / Windows) and head to duckduckgo.com/pro.
Privacy Pro is available for $9.99 USD/month or $99.99 USD/year in the U.S., and can be purchased through the Apple App Store, Google Play Store, or on the web via Stripe. Subscribers in the U.K., E.U., and Canada can sign up via the Apple App Store and Google Play Store only; international pricing details here. Your subscription will auto-renew monthly or annually, depending on the payment terms selected, until canceled. If you subscribed via the Apple App Store or Google Play Store, you can manage your subscription and payment methods there. If you subscribed via our website, you’ll manage your account from the DuckDuckGo browser’s Settings instead.
Note: This blog post has been edited since initial publication to stay up to date with our evolving product offerings.

Have you been waiting to try the DuckDuckGo browser? Maybe you’re using our browser on your phone but haven’t tried the Windows or Mac version? Now is the perfect time to make DuckDuckGo the default browser on all your devices, thanks to our latest improvement: Sync & Backup. You could already import bookmarks and passwords from other browsers into DuckDuckGo, but now you can privately sync those bookmarks and passwords between DuckDuckGo browsers on multiple devices.
When you use Chrome, there’s a good chance you’re signed in with your Google account – because they’re constantly pressuring you to do so! There is a convenience in that; all your bookmarks, passwords, and favorites follow you wherever you browse, whether you’re using your computer, phone, or tablet. But there’s a problem. This also gives Google implicit permission to collect even more data about your browsing activity than they would otherwise have and use it for targeted advertising that can follow you around.
At DuckDuckGo, we don’t track you; that’s our privacy policy in a nutshell. We’ve developed our privacy-respecting import and sync functions without requiring a DuckDuckGo account – and without compromising your personal data.
Our built-in password manager stores and encrypts your passwords locally on your device. Our private sync is end-to-end encrypted. (When you use private sync, your data stays securely encrypted throughout the syncing process, because the unique key needed to decrypt it is stored only on your devices.) Your passwords are completely inaccessible to anyone but you. That includes us: DuckDuckGo cannot access your data at any time.
The first step is to download our free browser on one or more devices. (The feature works across most Windows, Mac, Android, and iPhone devices – if you’ve got our browser, you can use Sync & Backup!) If you’re already using the browser, check that it’s up to date. Next, head to the browser’s Settings, choose Sync & Backup > Sync With Another Device and follow the instructions under Begin Syncing.
If you’re on a mobile phone or tablet, you can link devices with a QR code; on desktop computers, you’ll manually enter an alphanumeric code.

Sync passwords and bookmarks between devices by scanning a QR code or manually entering a unique alphanumeric code – no signing in necessary.
Only working with one device? Choose Sync and Back Up This Device from the “Single-Device Setup” section. Once your sync is complete, you can see a list of all your synced devices, edit device nicknames, and fine-tune your settings.

See a list of your synced devices – and add new ones! – under your browser’s Settings > Sync & Back Up.
Once you’re set up, you’ll want to save your Recovery PDF in a secure place. This document contains your Recovery Code, a unique code that will let you access your synced data if your devices are lost or damaged. This is especially important because of our secure end-to-end encryption; your Recovery Code contains the unique, locally generated encryption key that keeps your data private from everyone – including us! If you lose your devices, your Recovery Code is the only way to access your data from a new phone or computer.

With your Recovery Code, you can restore bookmarks, favorites, and other DuckDuckGo settings on a replacement device if yours is lost or damaged.
The DuckDuckGo browser comes with the features you expect from a go-to browser – it even banishes any ads we find that run on creepy trackers, without the need for an outside ad blocker. It also handles cookie pop-ups for you where we can. Plus, over a dozen powerful privacy protections not offered in most popular browsers by default. This uniquely comprehensive set of privacy protections helps protect your online activities, from searching to browsing, emailing, and more.
Our privacy protections work without you having to know anything about the technical details or deal with complicated settings. Just switch your browser to DuckDuckGo across all your devices, and you’ll get privacy by default.
For more detailed instructions on how to use the new sync function – or to peek under the hood of any of DuckDuckGo’s privacy protections! – you can find more information on our Help Pages.

At DuckDuckGo, our vision is to raise the standard of trust online. We also care about our impact offline, so we've stepped up to do our part in the climate crisis. We have already been doing what we can to minimize our carbon footprint, including using sustainable energy to power our servers and being a fully distributed company. We’re proud that, as of 2020, DuckDuckGo is carbon negative dating back to our founding in 2008.
When we set out to do this, we quickly realized there wasn’t much guidance for companies like ours that have 100% distributed teams and provide non-physical goods and services. We hope our experience figuring this out can be a reference guide for similar organizations. Here’s the summary:
We set out to calculate our carbon footprint using the commonly used Greenhouse Gas Protocol. The Protocol groups emissions into three “scopes” and additional activities:
Many companies who claim they are “carbon neutral” are often only looking at their Scope 1 or Scope 1 and 2 emissions, even though Scope 3 and Full Upstream/Downstream Activities are often where the vast majority of emissions take place—especially for organizations not producing or processing physical goods.
In addition, many organizations only look at activities where clear guidelines have been defined (e.g., air travel), but ignore areas where there are no guidelines (e.g., impact of marketing, home offices, etc.), even if much of the organization’s carbon emissions are the result of these activities.
At DuckDuckGo, we didn't think the standard went far enough, so we redefined our approach to make us responsible for all emissions we cause that are not already net zero, regardless of their categorization (or lack thereof).
To estimate our emissions, we pulled together leading source material from environmental agencies around the world including the UK DEFRA / DEEC 2012 GHG Conversion Factors for Company Reporting, the EPA's 2018 Emission Factors for Greenhouse Gas Inventories Report, the BEIS' 2019 Government Greenhouse Gas Conversion Factors for Company Reporting Methodology Paper, and the Environmental Commission of Ontario's 2019 Climate Pollution Report. From here, we mapped out the carbon footprint of every single transaction on our books for the entire 2019 calendar year (since we started working on this in mid-2020) and used that to build a model to estimate category emissions per accounting transaction. That means every vendor bill and credit card purchase by a team member.
While some transactions fit into standard models developed by government agencies (e.g., air travel), it turned out that to our knowledge, no one in government had ever calculated the carbon emissions of an online display advertisement. So, in cases where there was no standard model—or where we felt a standard model clearly under-estimated the actual carbon footprint—we developed our own formulas.
We then surveyed our team to better understand their home-office/co-working situations, including the hardware and software they use, their work-related transit, and recorded all this usage as if it were regular direct Scope 1 emissions.
This led to us estimating some currently unorthodox emissions including:
Lastly, we checked the sustainability programs of every single vendor we used in any capacity. Where one couldn't be identified, or where the program clearly failed to account for 100% of their carbon emissions, we recorded the full CO2e emissions from those transactions as our own.
In the end, our estimate for our 2019 emissions — including Scope 1, 2, 3, and Full Upstream/Downstream Activities — totaled 1,075T of CO2e. That works out to an average of 14.33T of CO2e/per year/per full-time team member. We used that figure to calculate a total of 5,875T of CO2e for the entire existence of DuckDuckGo, from our 2008 founding through 2020.
Once we felt our carbon emissions were properly estimated, we set out to understand how we could properly achieve net zero emissions in a way that would:
After an extensive review of our options, we first partnered with GoldStandard.org, an international non-profit foundation that is focused on reducing carbon emissions through sustainable investment in carbon reduction projects that also help improve the lives of those involved. Those projects included:
Current partner CNaught’s projects are similarly distributed across five categories ranging from emissions reductions to conservation and long-lived removal. You can learn more about each category, including example projects, on the CNaught website.
We're proud that DuckDuckGo is not only achieving net zero emissions, but doing so in a way that we hope will have a transformative and on-going impact around the world, creating jobs and improving the health and quality of life for many.
Addressing the climate crisis requires us to collectively get to net zero global emissions. We believe doing so will require the use of new technologies at scale, such as physically removing carbon from the atmosphere and sequestering it permanently. Unfortunately, this technology is too expensive right now to make an impact at scale.
In 2020, we were one of the first companies to join Stripe's Climate Program to bring down the cost of this technology by making commitments to fund this new type of carbon reduction. Unlike other carbon reduction methods, Stripe's program required that all carbon removal has a permanence of greater than a thousand years, is directly measured and verifiable, and has a net-negative lifecycle ratio of less than one.
Today, DuckDuckGo is pleased to contribute to carbon removal with Carbonfuture. We have committed that every year, whatever amount of money we spend on CNaught projects, we will make an equal dollar contribution to Carbonfuture to help directly remove carbon from the air – and more importantly, to help pull this technology forward. Visit Carbonfuture’s website to learn more about their rigorous, data-driven approach to carbon removal.
We're committed to doing our part, both online and off. As a DuckDuckGo user, we hope you can rest assured that we are doing our part in the climate crisis. We're now achieving net zero emissions through rigorously measured programs that continue to have a positive environmental and societal impact year after year. We're going carbon negative by funding projects to account for 125% of our emissions, and then doubling that total amount to invest in physically removing carbon from the air to advance this important technology for our future.
Note: This blog post has been edited since initial publication with additional information about our sustainability commitments.
For more privacy advice follow us on Twitter, and stay protected and informed with our privacy newsletter.
DISCLAIMER:
The use of hidden virtual machines (VMs) enables long-term access, credential harvesting, data exfiltration, and PayoutsKing ransomware deployment
Categories: Threat Research
Tags: virtual machine, QEMU, PayoutsKing, GOLD ENCOUNTER, CitrixBleed2
Categories: Threat Research
Tags: advisory, vulnerability, Adobe Reader
<p>Following our article on the challenges posed by agentic AI, we gave OpenClaw access to one of our legacy networks</p>
Categories: Threat Research
Tags: OpenClaw, LLM, AI, penetration testing, Red Team, CISO, Sophos X-Ops
Categories: Threat Research
Tags: advisory, NPM, Axios
<p>A phishing campaign targeting multiple organizations led to RMM installations – but not much else (yet). A threat actor experimenting, or an access-as-a-service attack underway?</p>
Categories: Threat Research
Tags: STAC6405, infostealer, RMM, Phishing
Categories: Threat Research
Tags: advisory, vulnerability, Oracle
Victimizing software developers via fake companies, jobs, and code repositories to steal cryptocurrency
Categories: Threat Research
Tags: NICKEL ALLEY, Contagious Interview, North Korea, clickfix
Keenadu malware gives an attacker control over a device but appears to be used primarily to facilitate ad fraud
Categories: Threat Research
Tags: Android, Keenadu
Eight Critical-severity bugs – none in Windows – appear in 84-CVE haul
Categories: Threat Research
Tags: Patch Tuesday, x-ops, Microsoft, Windows, detection
Analysis of attacks originating from Iran-linked threat groups reveals a preference for certain techniques
Categories: Threat Research
Tags: Iran, initial access
<p>Across three recent campaigns, Sophos X-Ops notes shifts in both lures and malware capabilities, as threat actors leveraging ClickFix techniques increasingly target macOS users with infostealers</p>
Categories: Threat Research
Tags: MacOS, infostealer, clickfix, MacSync, Social engineering
Rising tensions have sparked an increase in regional hacktivist activity, but impact has been minimal
Categories: Threat Research
Tags: hacktivism, Iran, israel, Operation Epic Fury
Categories: Threat Research
Tags: advisory, vulnerability, SD-WAN
<p>AI headline hype didn’t deliver a sea change for practical defense — but one below-the-radar development should</p>
Categories: Security Operations, Threat Research
Tags: Active Adversary, Active Adversary Report
Just 58 CVEs to spar with in February, but plenty are already under attack
Categories: Threat Research, X-ops
Tags: Patch Tuesday, Microsoft, Windows
DISCLAIMER:
Microsoft today pushed software updates to fix a staggering 167 security vulnerabilities in its Windows operating systems and related software, including a SharePoint Server zero-day and a publicly disclosed weakness in Windows Defender dubbed “BlueHammer.” Separately, Google Chrome fixed its fourth zero-day of 2026, and an emergency update for Adobe Reader nixes an actively exploited flaw that can lead to remote code execution.

Redmond warns that attackers are already targeting CVE-2026-32201, a vulnerability in Microsoft SharePoint Server that allows attackers to spoof trusted content or interfaces over a network.
Mike Walters, president and co-founder of Action1, said CVE-2026-32201 can be used to deceive employees, partners, or customers by presenting falsified information within trusted SharePoint environments.
“This CVE can enable phishing attacks, unauthorized data manipulation, or social engineering campaigns that lead to further compromise,” Walters said. “The presence of active exploitation significantly increases organizational risk.”
Microsoft also addressed BlueHammer (CVE-2026-33825), a privilege escalation bug in Windows Defender. According to BleepingComputer, the researcher who discovered the flaw published exploit code for it after notifying Microsoft and growing exasperated with their response. Will Dormann, senior principal vulnerability analyst at Tharros, says he confirmed that the public BlueHammer exploit code no longer works after installing today’s patches.
Satnam Narang, senior staff research engineer at Tenable, said April marks the second-biggest Patch Tuesday ever for Microsoft. Narang also said there are indications that a zero-day flaw Adobe patched in an emergency update on April 11 — CVE-2026-34621 — has seen active exploitation since at least November 2025.
Adam Barnett, lead software engineer at Rapid7, called the patch total from Microsoft today “a new record in that category” because it includes nearly 60 browser vulnerabilities. Barnett said it might be tempting to imagine that this sudden spike was tied to the buzz around the announcement a week ago today of Project Glasswing — a much-hyped but still unreleased new AI capability from Anthropic that is reportedly quite good at finding bugs in a vast array of software.
But he notes that Microsoft Edge is based on the Chromium engine, and the Chromium maintainers acknowledge a wide range of researchers for the vulnerabilities which Microsoft republished last Friday.
“A safe conclusion is that this increase in volume is driven by ever-expanding AI capabilities,” Barnett said. “We should expect to see further increases in vulnerability reporting volume as the impact of AI models extend further, both in terms of capability and availability.”
Finally, no matter what browser you use to surf the web, it’s important to completely close out and restart the browser periodically. This is really easy to put off (especially if you have a bajillion tabs open at any time) but it’s the only way to ensure that any available updates get installed. For example, a Google Chrome update released earlier this month fixed 21 security holes, including the high-severity zero-day flaw CVE-2026-5281.
For a clickable, per-patch breakdown, check out the SANS Internet Storm Center Patch Tuesday roundup. Running into problems applying any of these updates? Leave a note about it in the comments below and there’s a decent chance someone here will pipe in with a solution.
Hackers linked to Russia’s military intelligence units are using known flaws in older Internet routers to mass harvest authentication tokens from Microsoft Office users, security experts warned today. The spying campaign allowed state-backed Russian hackers to quietly siphon authentication tokens from users on more than 18,000 networks without deploying any malicious software or code.
Microsoft said in a blog post today it identified more than 200 organizations and 5,000 consumer devices that were caught up in a stealthy but remarkably simple spying network built by a Russia-backed threat actor known as “Forest Blizzard.”

How targeted DNS requests were redirected at the router. Image: Black Lotus Labs.
Also known as APT28 and Fancy Bear, Forest Blizzard is attributed to the military intelligence units within Russia’s General Staff Main Intelligence Directorate (GRU). APT 28 famously compromised the Hillary Clinton campaign, the Democratic National Committee, and the Democratic Congressional Campaign Committee in 2016 in an attempt to interfere with the U.S. presidential election.
Researchers at Black Lotus Labs, a security division of the Internet backbone provider Lumen, found that at the peak of its activity in December 2025, Forest Blizzard’s surveillance dragnet ensnared more than 18,000 Internet routers that were mostly unsupported, end-of-life routers, or else far behind on security updates. A new report from Lumen says the hackers primarily targeted government agencies—including ministries of foreign affairs, law enforcement, and third-party email providers.
Black Lotus Security Engineer Ryan English said the GRU hackers did not need to install malware on the targeted routers, which were mainly older Mikrotik and TP-Link devices marketed to the Small Office/Home Office (SOHO) market. Instead, they used known vulnerabilities to modify the Domain Name System (DNS) settings of the routers to include DNS servers controlled by the hackers.
As the U.K.’s National Cyber Security Centre (NCSC) notes in a new advisory detailing how Russian cyber actors have been compromising routers, DNS is what allows individuals to reach websites by typing familiar addresses, instead of associated IP addresses. In a DNS hijacking attack, bad actors interfere with this process to covertly send users to malicious websites designed to steal login details or other sensitive information.
English said the routers attacked by Forest Blizzard were reconfigured to use DNS servers that pointed to a handful of virtual private servers controlled by the attackers. Importantly, the attackers could then propagate their malicious DNS settings to all users on the local network, and from that point forward intercept any OAuth authentication tokens transmitted by those users.

DNS hijacking through router compromise. Image: Microsoft.
Because those tokens are typically transmitted only after the user has successfully logged in and gone through multi-factor authentication, the attackers could gain direct access to victim accounts without ever having to phish each user’s credentials and/or one-time codes.
“Everyone is looking for some sophisticated malware to drop something on your mobile devices or something,” English said. “These guys didn’t use malware. They did this in an old-school, graybeard way that isn’t really sexy but it gets the job done.”
Microsoft refers to the Forest Blizzard activity as using DNS hijacking “to support post-compromise adversary-in-the-middle (AiTM) attacks on Transport Layer Security (TLS) connections against Microsoft Outlook on the web domains.” The software giant said while targeting SOHO devices isn’t a new tactic, this is the first time Microsoft has seen Forest Blizzard using “DNS hijacking at scale to support AiTM of TLS connections after exploiting edge devices.”
Black Lotus Labs engineer Danny Adamitis said it will be interesting to see how Forest Blizzard reacts to today’s flurry of attention to their espionage operation, noting that the group immediately switched up its tactics in response to a similar NCSC report (PDF) in August 2025. At the time, Forest Blizzard was using malware to control a far more targeted and smaller group of compromised routers. But Adamitis said the day after the NCSC report, the group quickly ditched the malware approach in favor of mass-altering the DNS settings on thousands of vulnerable routers.
“Before the last NCSC report came out they used this capability in very limited instances,” Adamitis told KrebsOnSecurity. “After the report was released they implemented the capability in a more systemic fashion and used it to target everything that was vulnerable.”
TP-Link was among the router makers facing a complete ban in the United States. But on March 23, the U.S. Federal Communications Commission (FCC) took a much broader approach, announcing it would no longer certify consumer-grade Internet routers that are produced outside of the United States.
The FCC warned that foreign-made routers had become an untenable national security threat, and that poorly-secured routers present “a severe cybersecurity risk that could be leveraged to immediately and severely disrupt U.S. critical infrastructure and directly harm U.S. persons.”
Experts have countered that few new consumer-grade routers would be available for purchase under this new FCC policy (besides maybe Musk’s Starlink satellite Internet routers, which are produced in Texas). The FCC says router makers can apply for a special “conditional approval” from the Department of War or Department of Homeland Security, and that the new policy does not affect any previously-purchased consumer-grade routers.
An elusive hacker who went by the handle “UNKN” and ran the early Russian ransomware groups GandCrab and REvil now has a name and a face. Authorities in Germany say 31-year-old Russian Daniil Maksimovich Shchukin headed both cybercrime gangs and helped carry out at least 130 acts of computer sabotage and extortion against victims across the country between 2019 and 2021.
Shchukin was named as UNKN (a.k.a. UNKNOWN) in an advisory published by the German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short). The BKA said Shchukin and another Russian — 43-year-old Anatoly Sergeevitsch Kravchuk — extorted nearly $2 million euros across two dozen cyberattacks that caused more than 35 million euros in total economic damage.

Daniil Maksimovich SHCHUKIN, a.k.a. UNKN, and Anatoly Sergeevitsch Karvchuk, alleged leaders of the GandCrab and REvil ransomware groups.
Germany’s BKA said Shchukin acted as the head of one of the largest worldwide operating ransomware groups GandCrab and REvil, which pioneered the practice of double extortion — charging victims once for a key needed to unlock hacked systems, and a separate payment in exchange for a promise not to publish stolen data.
Shchukin’s name appeared in a Feb. 2023 filing (PDF) from the U.S. Justice Department seeking the seizure of various cryptocurrency accounts associated with proceeds from the REvil ransomware gang’s activities. The government said the digital wallet tied to Shchukin contained more than $317,000 in ill-gotten cryptocurrency.
The GandCrab ransomware affiliate program first surfaced in January 2018, and paid enterprising hackers huge shares of the profits just for hacking into user accounts at major corporations. The GandCrab team would then try to expand that access, often siphoning vast amounts of sensitive and internal documents in the process. The malware’s curators shipped five major revisions to the GandCrab code, each corresponding with sneaky new features and bug fixes aimed at thwarting the efforts of computer security firms to stymie the spread of the malware.
On May 31, 2019, the GandCrab team announced the group was shutting down after extorting more than $2 billion from victims. “We are a living proof that you can do evil and get off scot-free,” GandCrab’s farewell address famously quipped. “We have proved that one can make a lifetime of money in one year. We have proved that you can become number one by general admission, not in your own conceit.”
The REvil ransomware affiliate program materialized around the same as GandCrab’s demise, fronted by a user named UNKNOWN who announced on a Russian cybercrime forum that he’d deposited $1 million in the forum’s escrow to show he meant business. By this time, many cybersecurity experts had concluded REvil was little more than a reorganization of GandCrab.
UNKNOWN also gave an interview to Dmitry Smilyanets, a former malicious hacker hired by Recorded Future, wherein UNKNOWN described a rags-to-riches tale unencumbered by ethics and morals.
“As a child, I scrounged through the trash heaps and smoked cigarette butts,” UNKNOWN told Recorded Future. “I walked 10 km one way to the school. I wore the same clothes for six months. In my youth, in a communal apartment, I didn’t eat for two or even three days. Now I am a millionaire.”
As described in The Ransomware Hunting Team by Renee Dudley and Daniel Golden, UNKNOWN and REvil reinvested significant earnings into improving their success and mirroring practices of legitimate businesses. The authors wrote:
“Just as a real-world manufacturer might hire other companies to handle logistics or web design, ransomware developers increasingly outsourced tasks beyond their purview, focusing instead on improving the quality of their ransomware. The higher quality ransomware—which, in many cases, the Hunting Team could not break—resulted in more and higher pay-outs from victims. The monumental payments enabled gangs to reinvest in their enterprises. They hired more specialists, and their success accelerated.”
“Criminals raced to join the booming ransomware economy. Underworld ancillary service providers sprouted or pivoted from other criminal work to meet developers’ demand for customized support. Partnering with gangs like GandCrab, ‘cryptor’ providers ensured ransomware could not be detected by standard anti-malware scanners. ‘Initial access brokerages’ specialized in stealing credentials and finding vulnerabilities in target networks, selling that access to ransomware operators and affiliates. Bitcoin “tumblers” offered discounts to gangs that used them as a preferred vendor for laundering ransom payments. Some contractors were open to working with any gang, while others entered exclusive partnerships.”
REvil would evolve into a feared “big-game-hunting” machine capable of extracting hefty extortion payments from victims, largely going after organizations with more than $100 million in annual revenues and fat new cyber insurance policies that were known to pay out.
Over the July 4, 2021 weekend in the United States, REvil hacked into and extorted Kaseya, a company that handled IT operations for more than 1,500 businesses, nonprofits and government agencies. The FBI would later announce they’d infiltrated the ransomware group’s servers prior to the Kaseya hack but couldn’t tip their hand at the time. REvil never recovered from that core compromise, or from the FBI’s release of a free decryption key for REvil victims who couldn’t or didn’t pay.
Shchukin is from Krasnodar, Russia and is thought to reside there, the BKA said.
“Based on the investigations so far, it is assumed that the wanted person is abroad, presumably in Russia,” the BKA advised. “Travel behaviour cannot be ruled out.”
There is little that connects Shchukin to UNKNOWN’s various accounts on the Russian crime forums. But a review of the Russian crime forums indexed by the cyber intelligence firm Intel 471 shows there is plenty connecting Shchukin to a hacker identity called “Ger0in” who operated large botnets and sold “installs” — allowing other cybercriminals to rapidly deploy malware of their choice to thousands of PCs in one go. However, Ger0in was only active between 2010 and 2011, well before UNKNOWN’s appearance as the REvil front man.
A review of the mugshots released by the BKA at the image comparison site Pimeyes found a match on this birthday celebration from 2023, which features a young man named Daniel wearing the same fancy watch as in the BKA photos.
Update, April 6, 12:06 p.m. ET: A reader forwarded this English-dubbed audio recording from a ccc.de (37C3) conference talk in Germany from 2023 that previously outed Shchukin as the REvil leader (Shchuckin is mentioned at around 24:25).
A financially motivated data theft and extortion group is attempting to inject itself into the Iran war, unleashing a worm that spreads through poorly secured cloud services and wipes data on infected systems that use Iran’s time zone or have Farsi set as the default language.
Experts say the wiper campaign against Iran materialized this past weekend and came from a relatively new cybercrime group known as TeamPCP. In December 2025, the group began compromising corporate cloud environments using a self-propagating worm that went after exposed Docker APIs, Kubernetes clusters, Redis servers, and the React2Shell vulnerability. TeamPCP then attempted to move laterally through victim networks, siphoning authentication credentials and extorting victims over Telegram.

A snippet of the malicious CanisterWorm that seeks out and destroys data on systems that match Iran’s timezone or have Farsi as the default language. Image: Aikido.dev.
In a profile of TeamPCP published in January, the security firm Flare said the group weaponizes exposed control planes rather than exploiting endpoints, predominantly targeting cloud infrastructure over end-user devices, with Azure (61%) and AWS (36%) accounting for 97% of compromised servers.
“TeamPCP’s strength does not come from novel exploits or original malware, but from the large-scale automation and integration of well-known attack techniques,” Flare’s Assaf Morag wrote. “The group industrializes existing vulnerabilities, misconfigurations, and recycled tooling into a cloud-native exploitation platform that turns exposed infrastructure into a self-propagating criminal ecosystem.”
On March 19, TeamPCP executed a supply chain attack against the vulnerability scanner Trivy from Aqua Security, injecting credential-stealing malware into official releases on GitHub actions. Aqua Security said it has since removed the harmful files, but the security firm Wiz notes the attackers were able to publish malicious versions that snarfed SSH keys, cloud credentials, Kubernetes tokens and cryptocurrency wallets from users.
Over the weekend, the same technical infrastructure TeamPCP used in the Trivy attack was leveraged to deploy a new malicious payload which executes a wiper attack if the user’s timezone and locale are determined to correspond to Iran, said Charlie Eriksen, a security researcher at Aikido. In a blog post published on Sunday, Eriksen said if the wiper component detects that the victim is in Iran and has access to a Kubernetes cluster, it will destroy data on every node in that cluster.
“If it doesn’t it will just wipe the local machine,” Eriksen told KrebsOnSecurity.

Image: Aikido.dev.
Aikido refers to TeamPCP’s infrastructure as “CanisterWorm” because the group orchestrates their campaigns using an Internet Computer Protocol (ICP) canister — a system of tamperproof, blockchain-based “smart contracts” that combine both code and data. ICP canisters can serve Web content directly to visitors, and their distributed architecture makes them resistant to takedown attempts. These canisters will remain reachable so long as their operators continue to pay virtual currency fees to keep them online.
Eriksen said the people behind TeamPCP are bragging about their exploits in a group on Telegram and claim to have used the worm to steal vast amounts of sensitive data from major companies, including a large multinational pharmaceutical firm.
“When they compromised Aqua a second time, they took a lot of GitHub accounts and started spamming these with junk messages,” Eriksen said. “It was almost like they were just showing off how much access they had. Clearly, they have an entire stash of these credentials, and what we’ve seen so far is probably a small sample of what they have.”
Security experts say the spammed GitHub messages could be a way for TeamPCP to ensure that any code packages tainted with their malware will remain prominent in GitHub searches. In a newsletter published today titled GitHub is Starting to Have a Real Malware Problem, Risky Business reporter Catalin Cimpanu writes that attackers often are seen pushing meaningless commits to their repos or using online services that sell GitHub stars and “likes” to keep malicious packages at the top of the GitHub search page.
This weekend’s outbreak is the second major supply chain attack involving Trivy in as many months. At the end of February, Trivy was hit as part of an automated threat called HackerBot-Claw, which mass exploited misconfigured workflows in GitHub Actions to steal authentication tokens.
Eriksen said it appears TeamPCP used access gained in the first attack on Aqua Security to perpetrate this weekend’s mischief. But he said there is no reliable way to tell whether TeamPCP’s wiper actually succeeded in trashing any data from victim systems, and that the malicious payload was only active for a short time over the weekend.
“They’ve been taking [the malicious code] up and down, rapidly changing it adding new features,” Eriksen said, noting that when the malicious canister wasn’t serving up malware downloads it was pointing visitors to a Rick Roll video on YouTube.
“It’s a little all over the place, and there’s a chance this whole Iran thing is just their way of getting attention,” Eriksen said. “I feel like these people are really playing this Chaotic Evil role here.”
Cimpanu observed that supply chain attacks have increased in frequency of late as threat actors begin to grasp just how efficient they can be, and his post documents an alarming number of these incidents since 2024.
“While security firms appear to be doing a good job spotting this, we’re also gonna need GitHub’s security team to step up,” Cimpanu wrote. “Unfortunately, on a platform designed to copy (fork) a project and create new versions of it (clones), spotting malicious additions to clones of legitimate repos might be quite the engineering problem to fix.”
Update, 2:40 p.m. ET: Wiz is reporting that TeamPCP also pushed credential stealing malware to the KICS vulnerability scanner from Checkmarx, and that the scanner’s GitHub Action was compromised between 12:58 and 16:50 UTC today (March 23rd).
The U.S. Justice Department joined authorities in Canada and Germany in dismantling the online infrastructure behind four highly disruptive botnets that compromised more than three million Internet of Things (IoT) devices, such as routers and web cameras. The feds say the four botnets — named Aisuru, Kimwolf, JackSkid and Mossad — are responsible for a series of recent record-smashing distributed denial-of-service (DDoS) attacks capable of knocking nearly any target offline.

Image: Shutterstock, @Elzicon.
The Justice Department said the Department of Defense Office of Inspector General’s (DoDIG) Defense Criminal Investigative Service (DCIS) executed seizure warrants targeting multiple U.S.-registered domains, virtual servers, and other infrastructure involved in DDoS attacks against Internet addresses owned by the DoD.
The government alleges the unnamed people in control of the four botnets used their crime machines to launch hundreds of thousands of DDoS attacks, often demanding extortion payments from victims. Some victims reported tens of thousands of dollars in losses and remediation expenses.
The oldest of the botnets — Aisuru — issued more than 200,000 attacks commands, while JackSkid hurled at least 90,000 attacks. Kimwolf issued more than 25,000 attack commands, the government said, while Mossad was blamed for roughy 1,000 digital sieges.
The DOJ said the law enforcement action was designed to prevent further infection to victim devices and to limit or eliminate the ability of the botnets to launch future attacks. The case is being investigated by the DCIS with help from the FBI’s field office in Anchorage, Alaska, and the DOJ’s statement credits nearly two dozen technology companies with assisting in the operation.
“By working closely with DCIS and our international law enforcement partners, we collectively identified and disrupted criminal infrastructure used to carry out large-scale DDoS attacks,” said Special Agent in Charge Rebecca Day of the FBI Anchorage Field Office.
Aisuru emerged in late 2024, and by mid-2025 it was launching record-breaking DDoS attacks as it rapidly infected new IoT devices. In October 2025, Aisuru was used to seed Kimwolf, an Aisuru variant which introduced a novel spreading mechanism that allowed the botnet to infect devices hidden behind the protection of the user’s internal network.
On January 2, 2026, the security firm Synthient publicly disclosed the vulnerability Kimwolf was using to propagate so quickly. That disclosure helped curtail Kimwolf’s spread somewhat, but since then several other IoT botnets have emerged that effectively copy Kimwolf’s spreading methods while competing for the same pool of vulnerable devices. According to the DOJ, the JackSkid botnet also sought out systems on internal networks just like Kimwolf.
The DOJ said its disruption of the four botnets coincided with “law enforcement actions” conducted in Canada and Germany targeting individuals who allegedly operated those botnets, although no further details were available on the suspected operators.
In late February, KrebsOnSecurity identified a 22-year-old Canadian man as a core operator of the Kimwolf botnet. Multiple sources familiar with the investigation told KrebsOnSecurity the other prime suspect is a 15-year-old living in Germany.
A hacktivist group with links to Iran’s intelligence agencies is claiming responsibility for a data-wiping attack against Stryker, a global medical technology company based in Michigan. News reports out of Ireland, Stryker’s largest hub outside of the United States, said the company sent home more than 5,000 workers there today. Meanwhile, a voicemail message at Stryker’s main U.S. headquarters says the company is currently experiencing a building emergency.
Based in Kalamazoo, Michigan, Stryker [NYSE:SYK] is a medical and surgical equipment maker that reported $25 billion in global sales last year. In a lengthy statement posted to Telegram, a hacktivist group known as Handala (a.k.a. Handala Hack Team) claimed that Stryker’s offices in 79 countries have been forced to shut down after the group erased data from more than 200,000 systems, servers and mobile devices.

A manifesto posted by the Iran-backed hacktivist group Handala, claiming a mass data-wiping attack against medical technology maker Stryker.
“All the acquired data is now in the hands of the free people of the world, ready to be used for the true advancement of humanity and the exposure of injustice and corruption,” a portion of the Handala statement reads.
The group said the wiper attack was in retaliation for a Feb. 28 missile strike that hit an Iranian school and killed at least 175 people, most of them children. The New York Times reports today that an ongoing military investigation has determined the United States is responsible for the deadly Tomahawk missile strike.
Handala was one of several hacker groups recently profiled by Palo Alto Networks, which links it to Iran’s Ministry of Intelligence and Security (MOIS). Palo Alto says Handala surfaced in late 2023 and is assessed as one of several online personas maintained by Void Manticore, a MOIS-affiliated actor.
Stryker’s website says the company has 56,000 employees in 61 countries. A phone call placed Wednesday morning to the media line at Stryker’s Michigan headquarters sent this author to a voicemail message that stated, “We are currently experiencing a building emergency. Please try your call again later.”
A report Wednesday morning from the Irish Examiner said Stryker staff are now communicating via WhatsApp for any updates on when they can return to work. The story quoted an unnamed employee saying anything connected to the network is down, and that “anyone with Microsoft Outlook on their personal phones had their devices wiped.”
“Multiple sources have said that systems in the Cork headquarters have been ‘shut down’ and that Stryker devices held by employees have been wiped out,” the Examiner reported. “The login pages coming up on these devices have been defaced with the Handala logo.”
Wiper attacks usually involve malicious software designed to overwrite any existing data on infected devices. But a trusted source with knowledge of the attack who spoke on condition of anonymity told KrebsOnSecurity the perpetrators in this case appear to have used a Microsoft service called Microsoft Intune to issue a ‘remote wipe’ command against all connected devices.
Intune is a cloud-based solution built for IT teams to enforce security and data compliance policies, and it provides a single, web-based administrative console to monitor and control devices regardless of location. The Intune connection is supported by this Reddit discussion on the Stryker outage, where several users who claimed to be Stryker employees said they were told to uninstall Intune urgently.
Palo Alto says Handala’s hack-and-leak activity is primarily focused on Israel, with occasional targeting outside that scope when it serves a specific agenda. The security firm said Handala also has taken credit for recent attacks against fuel systems in Jordan and an Israeli energy exploration company.
“Recent observed activities are opportunistic and ‘quick and dirty,’ with a noticeable focus on supply-chain footholds (e.g., IT/service providers) to reach downstream victims, followed by ‘proof’ posts to amplify credibility and intimidate targets,” Palo Alto researchers wrote.
The Handala manifesto posted to Telegram referred to Stryker as a “Zionist-rooted corporation,” which may be a reference to the company’s 2019 acquisition of the Israeli company OrthoSpace.
Stryker is a major supplier of medical devices, and the ongoing attack is already affecting healthcare providers. One healthcare professional at a major university medical system in the United States told KrebsOnSecurity they are currently unable to order surgical supplies that they normally source through Stryker.
“This is a real-world supply chain attack,” the expert said, who asked to remain anonymous because they were not authorized to speak to the press. “Pretty much every hospital in the U.S. that performs surgeries uses their supplies.”
John Riggi, national advisor for the American Hospital Association (AHA), said the AHA is not aware of any supply-chain disruptions as of yet.
“We are aware of reports of the cyber attack against Stryker and are actively exchanging information with the hospital field and the federal government to understand the nature of the threat and assess any impact to hospital operations,” Riggi said in an email. “As of this time, we are not aware of any direct impacts or disruptions to U.S. hospitals as a result of this attack. That may change as hospitals evaluate services, technology and supply chain related to Stryker and if the duration of the attack extends.”
According to a March 11 memo from the state of Maryland’s Institute for Emergency Medical Services Systems, Stryker indicated that some of their computer systems have been impacted by a “global network disruption.” The memo indicates that in response to the attack, a number of hospitals have opted to disconnect from Stryker’s various online services, including LifeNet, which allows paramedics to transmit EKGs to emergency physicians so that heart attack patients can expedite their treatment when they arrive at the hospital.
“As a precaution, some hospitals have temporarily suspended their connection to Stryker systems, including LIFENET, while others have maintained the connection,” wrote Timothy Chizmar, the state’s EMS medical director. “The Maryland Medical Protocols for EMS requires ECG transmission for patients with acute coronary syndrome (or STEMI). However, if you are unable to transmit a 12 Lead ECG to a receiving hospital, you should initiate radio consultation and describe the findings on the ECG.”
This is a developing story. Updates will be noted with a timestamp.
Update, 2:54 p.m. ET: Added comment from Riggi and perspectives on this attack’s potential to turn into a supply-chain problem for the healthcare system.
Update, Mar. 12, 7:59 a.m. ET: Added information about the outage affecting Stryker’s online services.
Microsoft Corp. today pushed security updates to fix at least 77 vulnerabilities in its Windows operating systems and other software. There are no pressing “zero-day” flaws this month (compared to February’s five zero-day treat), but as usual some patches may deserve more rapid attention from organizations using Windows. Here are a few highlights from this month’s Patch Tuesday.

Image: Shutterstock, @nwz.
Two of the bugs Microsoft patched today were publicly disclosed previously. CVE-2026-21262 is a weakness that allows an attacker to elevate their privileges on SQL Server 2016 and later editions.
“This isn’t just any elevation of privilege vulnerability, either; the advisory notes that an authorized attacker can elevate privileges to sysadmin over a network,” Rapid7’s Adam Barnett said. “The CVSS v3 base score of 8.8 is just below the threshold for critical severity, since low-level privileges are required. It would be a courageous defender who shrugged and deferred the patches for this one.”
The other publicly disclosed flaw is CVE-2026-26127, a vulnerability in applications running on .NET. Barnett said the immediate impact of exploitation is likely limited to denial of service by triggering a crash, with the potential for other types of attacks during a service reboot.
It would hardly be a proper Patch Tuesday without at least one critical Microsoft Office exploit, and this month doesn’t disappoint. CVE-2026-26113 and CVE-2026-26110 are both remote code execution flaws that can be triggered just by viewing a booby-trapped message in the Preview Pane.
Satnam Narang at Tenable notes that just over half (55%) of all Patch Tuesday CVEs this month are privilege escalation bugs, and of those, a half dozen were rated “exploitation more likely” — across Windows Graphics Component, Windows Accessibility Infrastructure, Windows Kernel, Windows SMB Server and Winlogon. These include:
–CVE-2026-24291: Incorrect permission assignments within the Windows Accessibility Infrastructure to reach SYSTEM (CVSS 7.8)
–CVE-2026-24294: Improper authentication in the core SMB component (CVSS 7.8)
–CVE-2026-24289: High-severity memory corruption and race condition flaw (CVSS 7.8)
–CVE-2026-25187: Winlogon process weakness discovered by Google Project Zero (CVSS 7.8).
Ben McCarthy, lead cyber security engineer at Immersive, called attention to CVE-2026-21536, a critical remote code execution bug in a component called the Microsoft Devices Pricing Program. Microsoft has already resolved the issue on their end, and fixing it requires no action on the part of Windows users. But McCarthy says it’s notable as one of the first vulnerabilities identified by an AI agent and officially recognized with a CVE attributed to the Windows operating system. It was discovered by XBOW, a fully autonomous AI penetration testing agent.
XBOW has consistently ranked at or near the top of the Hacker One bug bounty leaderboard for the past year. McCarthy said CVE-2026-21536 demonstrates how AI agents can identify critical 9.8-rated vulnerabilities without access to source code.
“Although Microsoft has already patched and mitigated the vulnerability, it highlights a shift toward AI-driven discovery of complex vulnerabilities at increasing speed,” McCarthy said. “This development suggests AI-assisted vulnerability research will play a growing role in the security landscape.”
Microsoft earlier provided patches to address nine browser vulnerabilities, which are not included in the Patch Tuesday count above. In addition, Microsoft issued a crucial out-of-band (emergency) update on March 2 for Windows Server 2022 to address a certificate renewal issue with passwordless authentication technology Windows Hello for Business.
Separately, Adobe shipped updates to fix 80 vulnerabilities — some of them critical in severity — in a variety of products, including Acrobat and Adobe Commerce. Mozilla Firefox v. 148.0.2 resolves three high severity CVEs.
For a complete breakdown of all the patches Microsoft released today, check out the SANS Internet Storm Center’s Patch Tuesday post. Windows enterprise admins who wish to stay abreast of any news about problematic updates, AskWoody.com is always worth a visit. Please feel free to drop a comment below if you experience any issues apply this month’s patches.
AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.
The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.
If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.
Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.
“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”
You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.
“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.
There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.
Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.
With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.
“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”
O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.
One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.
A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.
According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.
“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.
“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”
AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.
Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.
Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.
“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”
The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.
AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.
“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”
“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”
For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.
“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”
This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.
“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”
One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.
“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.
As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.
The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.
“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”
DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.
“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”
In early January 2026, KrebsOnSecurity revealed how a security researcher disclosed a vulnerability that was used to build Kimwolf, the world’s largest and most disruptive botnet. Since then, the person in control of Kimwolf — who goes by the handle “Dort” — has coordinated a barrage of distributed denial-of-service (DDoS), doxing and email flooding attacks against the researcher and this author, and more recently caused a SWAT team to be sent to the researcher’s home. This post examines what is knowable about Dort based on public information.
A public “dox” created in 2020 asserted Dort was a teenager from Canada (DOB August 2003) who used the aliases “CPacket” and “M1ce.” A search on the username CPacket at the open source intelligence platform OSINT Industries finds a GitHub account under the names Dort and CPacket that was created in 2017 using the email address jay.miner232@gmail.com.

Image: osint.industries.
The cyber intelligence firm Intel 471 says jay.miner232@gmail.com was used between 2015 and 2019 to create accounts at multiple cybercrime forums, including Nulled (username “Uubuntuu”) and Cracked (user “Dorted”); Intel 471 reports that both of these accounts were created from the same Internet address at Rogers Canada (99.241.112.24).
Dort was an extremely active player in the Microsoft game Minecraft who gained notoriety for their “Dortware” software that helped players cheat. But somewhere along the way, Dort graduated from hacking Minecraft games to enabling far more serious crimes.
Dort also used the nickname DortDev, an identity that was active in March 2022 on the chat server for the prolific cybercrime group known as LAPSUS$. Dort peddled a service for registering temporary email addresses, as well as “Dortsolver,” code that could bypass various CAPTCHA services designed to prevent automated account abuse. Both of these offerings were advertised in 2022 on SIM Land, a Telegram channel dedicated to SIM-swapping and account takeover activity.
The cyber intelligence firm Flashpoint indexed 2022 posts on SIM Land by Dort that show this person developed the disposable email and CAPTCHA bypass services with the help of another hacker who went by the handle “Qoft.”
“I legit just work with Jacob,” Qoft said in 2022 in reply to another user, referring to their exclusive business partner Dort. In the same conversation, Qoft bragged that the two had stolen more than $250,000 worth of Microsoft Xbox Game Pass accounts by developing a program that mass-created Game Pass identities using stolen payment card data.
Who is the Jacob that Qoft referred to as their business partner? The breach tracking service Constella Intelligence finds the password used by jay.miner232@gmail.com was reused by just one other email address: jacobbutler803@gmail.com. Recall that the 2020 dox of Dort said their date of birth was August 2003 (8/03).
Searching this email address at DomainTools.com reveals it was used in 2015 to register several Minecraft-themed domains, all assigned to a Jacob Butler in Ottawa, Canada and to the Ottawa phone number 613-909-9727.
Constella Intelligence finds jacobbutler803@gmail.com was used to register an account on the hacker forum Nulled in 2016, as well as the account name “M1CE” on Minecraft. Pivoting off the password used by their Nulled account shows it was shared by the email addresses j.a.y.m.iner232@gmail.com and jbutl3@ocdsb.ca, the latter being an address at a domain for the Ottawa-Carelton District School Board.
Data indexed by the breach tracking service Spycloud suggests that at one point Jacob Butler shared a computer with his mother and a sibling, which might explain why their email accounts were connected to the password “jacobsplugs.” Neither Jacob nor any of the other Butler household members responded to requests for comment.
The open source intelligence service Epieos finds jacobbutler803@gmail.com created the GitHub account “MemeClient.” Meanwhile, Flashpoint indexed a deleted anonymous Pastebin.com post from 2017 declaring that MemeClient was the creation of a user named CPacket — one of Dort’s early monikers.
Why is Dort so mad? On January 2, KrebsOnSecurity published The Kimwolf Botnet is Stalking Your Local Network, which explored research into the botnet by Benjamin Brundage, founder of the proxy tracking service Synthient. Brundage figured out that the Kimwolf botmasters were exploiting a little-known weakness in residential proxy services to infect poorly-defended devices — like TV boxes and digital photo frames — plugged into the internal, private networks of proxy endpoints.
By the time that story went live, most of the vulnerable proxy providers had been notified by Brundage and had fixed the weaknesses in their systems. That vulnerability remediation process massively slowed Kimwolf’s ability to spread, and within hours of the story’s publication Dort created a Discord server in my name that began publishing personal information about and violent threats against Brundage, Yours Truly, and others.

Dort and friends incriminating themselves by planning swatting attacks in a public Discord server.
Last week, Dort and friends used that same Discord server (then named “Krebs’s Koinbase Kallers”) to threaten a swatting attack against Brundage, again posting his home address and personal information. Brundage told KrebsOnSecurity that local police officers subsequently visited his home in response to a swatting hoax which occurred around the same time that another member of the server posted a door emoji and taunted Brundage further.

Dort, using the alias “Meow,” taunts Synthient founder Ben Brundage with a picture of a door.
Someone on the server then linked to a cringeworthy (and NSFW) new Soundcloud diss track recorded by the user DortDev that included a stickied message from Dort saying, “Ur dead nigga. u better watch ur fucking back. sleep with one eye open. bitch.”
“It’s a pretty hefty penny for a new front door,” the diss track intoned. “If his head doesn’t get blown off by SWAT officers. What’s it like not having a front door?”
With any luck, Dort will soon be able to tell us all exactly what it’s like.
Update, 10:29 a.m.: Jacob Butler responded to requests for comment, speaking with KrebsOnSecurity briefly via telephone. Butler said he didn’t notice earlier requests for comment because he hasn’t really been online since 2021, after his home was swatted multiple times. He acknowledged making and distributing a Minecraft cheat long ago, but said he hasn’t played the game in years and was not involved in Dortsolver or any other activity attributed to the Dort nickname after 2021.
“It was a really old cheat and I don’t remember the name of it,” Butler said of his Minecraft modification. “I’m very stressed, man. I don’t know if people are going to swat me again or what. After that, I pretty much walked away from everything, logged off and said fuck that. I don’t go online anymore. I don’t know why people would still be going after me, to be completely honest.”
When asked what he does for a living, Butler said he mostly stays home and helps his mom around the house because he struggles with autism and social interaction. He maintains that someone must have compromised one or more of his old accounts and is impersonating him online as Dort.
“Someone is actually probably impersonating me, and now I’m really worried,” Butler said. “This is making me relive everything.”
But there are issues with Butler’s timeline. For example, Jacob’s voice in our phone conversation was remarkably similar to the Jacob/Dort whose voice can be heard in this Sept. 2022 Clash of Code competition between Dort and another coder (Dort lost). At around 6 minutes and 10 seconds into the recording, Dort launches into a cursing tirade that mirrors the stream of profanity in the diss rap that Dortdev posted threatening Brundage. Dort can be heard again at around 16 minutes; at around 26:00, Dort threatens to swat his opponent.
Butler said the voice of Dort is not his, exactly, but rather that of an impersonator who had likely cloned his voice.
“I would like to clarify that was absolutely not me,” Butler said. “There must be someone using a voice changer. Or something of the sorts. Because people were cloning my voice before and sending audio clips of ‘me’ saying outrageous stuff.”
Further reading:
Jan. 8, 2026: Who Benefited from the Aisuru and Kimwolf Botnets?
Jan. 20, 2026: Kimwolf Botnet Lurking in Corporate, Govt. Networks
Jan. 26, 2026: Who Operates the Badbox 2.0 Botnet?
Feb. 11, 2026: Kimwolf Botnet Swamps Anonymity Network I2P
Mar. 19, 2026: Feds Disrupt IoT Botnets Behind Huge DDoS Attacks
Most phishing websites are little more than static copies of login pages for popular online destinations, and they are often quickly taken down by anti-abuse activists and security firms. But a stealthy new phishing-as-a-service offering lets customers sidestep both of these pitfalls: It uses cleverly disguised links to load the target brand’s real website, and then acts as a relay between the victim and the legitimate site — forwarding the victim’s username, password and multi-factor authentication (MFA) code to the legitimate site and returning its responses.
There are countless phishing kits that would-be scammers can use to get started, but successfully wielding them requires some modicum of skill in configuring servers, domain names, certificates, proxy services, and other repetitive tech drudgery. Enter Starkiller, a new phishing service that dynamically loads a live copy of the real login page and records everything the user types, proxying the data from the legitimate site back to the victim.
According to an analysis of Starkiller by the security firm Abnormal AI, the service lets customers select a brand to impersonate (e.g., Apple, Facebook, Google, Microsoft et. al.) and generates a deceptive URL that visually mimics the legitimate domain while routing traffic through the attacker’s infrastructure.
For example, a phishing link targeting Microsoft customers appears as “login.microsoft.com@[malicious/shortened URL here].” The “@” sign in the link trick is an oldie but goodie, because everything before the “@” in a URL is considered username data, and the real landing page is what comes after the “@” sign. Here’s what it looks like in the target’s browser:

Image: Abnormal AI. The actual malicious landing page is blurred out in this picture, but we can see it ends in .ru. The service also offers the ability to insert links from different URL-shortening services.
Once Starkiller customers select the URL to be phished, the service spins up a Docker container running a headless Chrome browser instance that loads the real login page, Abnormal found.
“The container then acts as a man-in-the-middle reverse proxy, forwarding the end user’s inputs to the legitimate site and returning the site’s responses,” Abnormal researchers Callie Baron and Piotr Wojtyla wrote in a blog post on Thursday. “Every keystroke, form submission, and session token passes through attacker-controlled infrastructure and is logged along the way.”
Starkiller in effect offers cybercriminals real-time session monitoring, allowing them to live-stream the target’s screen as they interact with the phishing page, the researchers said.
“The platform also includes keylogger capture for every keystroke, cookie and session token theft for direct account takeover, geo-tracking of targets, and automated Telegram alerts when new credentials come in,” they wrote. “Campaign analytics round out the operator experience with visit counts, conversion rates, and performance graphs—the same kind of metrics dashboard a legitimate SaaS [software-as-a-service] platform would offer.”
Abnormal said the service also deftly intercepts and relays the victim’s MFA credentials, since the recipient who clicks the link is actually authenticating with the real site through a proxy, and any authentication tokens submitted are then forwarded to the legitimate service in real time.
“The attacker captures the resulting session cookies and tokens, giving them authenticated access to the account,” the researchers wrote. “When attackers relay the entire authentication flow in real time, MFA protections can be effectively neutralized despite functioning exactly as designed.”

The “URL Masker” feature of the Starkiller phishing service features options for configuring the malicious link. Image: Abnormal.
Starkiller is just one of several cybercrime services offered by a threat group calling itself Jinkusu, which maintains an active user forum where customers can discuss techniques, request features and troubleshoot deployments. One a-la-carte feature will harvest email addresses and contact information from compromised sessions, and advises the data can be used to build target lists for follow-on phishing campaigns.
This service strikes me as a remarkable evolution in phishing, and its apparent success is likely to be copied by other enterprising cybercriminals (assuming the service performs as well as it claims). After all, phishing users this way avoids the upfront costs and constant hassles associated with juggling multiple phishing domains, and it throws a wrench in traditional phishing detection methods like domain blocklisting and static page analysis.
It also massively lowers the barrier to entry for novice cybercriminals, Abnormal researchers observed.
“Starkiller represents a significant escalation in phishing infrastructure, reflecting a broader trend toward commoditized, enterprise-style cybercrime tooling,” their report concludes. “Combined with URL masking, session hijacking, and MFA bypass, it gives low-skill cybercriminals access to attack capabilities that were previously out of reach.”
DISCLAIMER:
If you hold cryptocurrency, there's a very simple golden rule that you should always follow. Never hand over your seed phrase. Garrett Dutton, better known as G. Love - the front man of blues-hip-hop outfit G. Love & Special Sauce - has learnt that lesson the hard way. Read more in my article on the Hot for Security blog.
Have you ever taken a look at your Microsoft 365 mailbox rules? If not, it might be worth a few minutes of your time. Because newly released research reveals that hackers may already have beaten you to it. Read more in my article on the Fortra blog.
A hacking group claims to have broken into the flood defence system protecting Venice's Piazza San Marco - and is offering to sell access to whoever wants it. The asking price? A frankly insulting $600. Meanwhile, Anthropic accidentally leaked the source code for Claude Code via a basic packaging mistake. Oh, and by the way, they've also just revealed they've built an AI model called Mythos that can find and chain together software vulnerabilities faster than any human. Sleep well. All this and more in episode 463 of the “Smashing Security” podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Tanya Janca.
Cybersecurity researchers have revealed that 108 malicious Google Chrome extensions have been quietly stealing user credentials, hijacking Telegram sessions, and injecting unwanted ads and scripts into browsers - all reporting back to the same central point. Read more in my article on the Hot for Security blog.
The fraud landscape has been changed by AI and cryptocurrency in a way that should concern organisations and individuals alike. Read more in my article on the Fortra blog.
LinkedIn has been secretly scanning your browser for over 6,000 installed extensions — on every single click you make. It can tell if you're job hunting, what religion you are, and whether you have ADHD. And none of this is mentioned anywhere in their privacy policy. Meanwhile, California's crypto millionaires are learning that no amount of encryption can protect you from someone who knocks on your door pretending to deliver a pizza. All this and more in episode 462 of the “Smashing Security” podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Dave Bittner.
Cambodia has taken a dramatic step in its fight against scam compounds that have imprisoned innocent people, and forced them to work as virtual slaves defrauding victims via the internet around the world with romance scams and dodgy investment schemes. Read more in my article on the Hot for Security blog.
A Nigerian fraudster spent years posing as a woman online, romancing unsuspecting American men out of their savings - until he accidentally tried the same trick on a fellow scammer, who told him to "learn how to do a clean job." The recovered chat logs helped put him behind bars for 15 years. Read more in my article on the Hot for Security blog.
A cannabis-growing, beekeeping, gyrocopter-flying Irishman invested his drug money in Bitcoin back in 2011 - and now sits on a fortune worth $400 million. There's just one small problem: the access codes were tucked inside his fishing rod case, which has mysteriously vanished. Or has it? Because this week, one of his frozen wallets suddenly woke up and moved $35 million - and someone had to identify themselves to do it. Meanwhile, Ajax Football Club scores a spectacular cyber own-goal, as a data breach that the club claimed affected "a few hundred" fans turns out to may have exposed the personal details of 300,000 supporters - along with the ability to steal match tickets and quietly remove people from the stadium ban list. All this and more in episode 461 of the "Smashing Security" podcast with cybersecurity expert and keynote speaker Graham Cluley, joined this week by special guest Danny Palmer.
A man has appeared in federal court in Austin, Texas, after being extradited to the United States to face charges related to his alleged role as a key developer of the notorious RedLine malware. Read more in my article on the Hot for Security blog.
It's not every day that you read that the head of America's top law enforcement agency has been hacked, but then - these aren't ordinary times. Read more in my article on the Hot for Security blog.
World Leaks is a cyber extortion operation that steals sensitive data from organizations and threatens to leak it via the dark web if a ransom is not paid. Read more in my article on the Fortra blog.
A disgruntled data analyst decides that the best response to losing his contract is to steal the entire company payroll database and demand $2.5 million in Bitcoin - signing his extortion emails from a company called "Loot." Meanwhile, two people drive up to the entrance of the UK's nuclear submarine base at Faslane and politely ask if they can have a look around. Tourists? Spies? Something in between? All this and more in episode 460 of the "Smashing Security" podcast with cybersecurity veteran and keynote speaker Graham Cluley, and special guest Jenny Radcliffe.
A man has pleaded guilty to defrauding online music streaming platforms out of more than US $8 million, after creating hundreds of thousands of songs with AI, and then using bots to play them billions of times. Read more in my article on the Hot for Security blog.
Pedestrians crossing a street in Denver, Colorado, got rather more than they bargained for last weekend, when the audio signals at two crosswalks began broadcasting a political message alongside their usual walking instructions. Read more in my article on the Hot for Security blog.
A ransomware gang that claims to be a group of "investigative journalists"? Meet LeakNet - the group using fake CAPTCHA pages to trick employees into hacking themselves. Read more in my article on the Fortra blog.
In episode 459 of Smashing Security, we dive into a chillingly clever account takeover attempt targeting WordPress co-founder Matt Mullenweg - involving MFA fatigue, real Apple alerts, a convincing support call, and a phishing page that oh-so-nearly worked. If a famous techie could have this happen to you, can you be sure you're immune? Plus: would you donate your lifetime medical history to science if you were promised anonymity? We unpack serious concerns around UK Biobank, where “de-identified” data may not be as anonymous as you think — and how surprisingly little information it takes to reveal everything. And! Human-powered “AI”, and a punishment worse than prison: eight hours on the RSA expo floor... All this, and much more, in episode 459 of the "Smashing Security" podcast with cybersecurity veteran and keynote speaker Graham Cluley, and special guest Paul Ducklin.
Drivers in the Russian city of Perm have been enjoying an unexpected bonus this week: free parking. Not because the city council suddenly decided to embrace generosity - but rather because hackers succeeded in knocking the city's payment system offline. Read more in my article on the Hot for Security blog.
If you're in the middle of applying for a planning or zoning permit, there is some unwelcome news: cyber-criminals have found a way to exploit the bureaucratic tedium of the process against you. Read more in my article on the Fortra blog.
Signal, the encrypted messaging app trusted by security-savvy users around the world, has confirmed that hackers have managed to takeover accounts - with government officials and journalists among those being targeted. Read more in my article on the Hot for Security blog.
DISCLAIMER:
The Payouts King ransomware is using the QEMU emulator as a reverse SSH backdoor to run hidden virtual machines on compromised systems and bypass endpoint security. [...]
Kyrgyzstan-based cryptocurrency exchange Grinex has suspended its operations after suffering a $13.7 million hack attributed to Western intelligence agencies. [...]
In cybercrime markets, trust isn't assumed, it's verified. Flare reveals how underground guides teach actors to evaluate carding shops based on data quality, reputation, and survivability. [...]
Cyberattacks are evolving faster than many MSP and corporate defenses can keep up, with phishing driving much of today's cybercrime. Join our upcoming webinar to learn how to combine security and recovery strategies to reduce risk and maintain business continuity. [...]
CISA warned that attackers are now exploiting a high-severity Apache ActiveMQ vulnerability, which was patched earlier this month after going undetected for 13 years. [...]
Microsoft warns that some Windows domain controllers are entering restart loops after installing the April 2026 security updates. [...]
23-year-old Kamerin Stokes of Memphis, Tennessee, was sentenced to 30 months in prison for selling access to tens of thousands of hacked DraftKings accounts. [...]
Threat actors are exploiting three recently disclosed Windows security vulnerabilities in attacks aimed at gaining SYSTEM or elevated administrator permissions. [...]
The latest wave of "Operation PowerOFF," on April 13, 2026, targeted the distributed denial-of-service (DDoS) ecosystem and its users across 21 countries. [...]
A new malware called ZionSiphon, specifically designed for operational technology, is targeting water treatment and desalination environments to sabotage their operations. [...]
A researcher known as "Chaotic Eclipse" has published a proof-of-concept exploit for a second Microsoft Defender zero-day, dubbed "RedSun," in the past two weeks, protesting how the company works with cybersecurity researchers. [...]
Hackers are exploiting a critical vulnerability in Marimo reactive Python notebook to deploy a new variant of NKAbuse malware hosted on Hugging Face Spaces. [...]
Google says it is increasingly using its Gemini AI models to detect and block harmful ads on its advertising platforms, as scammers and threat actors continue to evolve their tactics to evade detection. [...]
A new cybercrime platform called ATHR can harvest credentials via fully automated voice phishing attacks that use both human operators and AI agents for the social engineering phase. [...]
AI-powered SOC tools promise automation, but most only speed up triage instead of reducing real workload. Tines shows how real gains come from end-to-end workflows that execute actions across systems, not just summarize alerts. [...]
DISCLAIMER:
Signal, the privacy-focused messaging app, has announced new features to enhance its calling experience, making it easier for users to initiate and manage group calls. The primary addition, “Call Links,” allows users to share a link to initiate a call with any contact on Signal without the need to create a group chat. This feature …
The post Signal Introduces Call Links for Simplified Private Group Calls appeared first on RestorePrivacy.
The Tor Project is currently facing an unusual, ongoing attack aimed at its infrastructure. For several weeks, an unknown threat actor has been spoofing the IP addresses of Tor relays and directory authorities, sending fake TCP SYN packets over SSH’s port 22. This technique has led to a flood of abuse complaints directed at Tor …
The post Tor Relays Targeted in IP Spoofing Campaign Causing Widespread Disruptions appeared first on RestorePrivacy.
Proton has launched its much-anticipated Black Friday sale for 2024, offering incredible discounts on services like Proton VPN, Proton Mail, Drive, and Pass. These Proton deals all include a 30-day money-back guarantee, allowing you to assess the service risk-free. This sale is the perfect chance to boost your online privacy and access premium features at …
The post Proton Black Friday Deals Go Live: VPN, Mail, Drive, Pass appeared first on RestorePrivacy.
Session, the encrypted messaging app known for its commitment to privacy and decentralization, announced a change of base from Australia to Switzerland. The app will now be overseen by the newly formed Session Technology Foundation (STF), based in central Europe. This move follows increasing regulatory pressure on privacy technologies in Australia, where the app was …
The post Encrypted Messenger Session Moves to Switzerland Amid Privacy Concerns appeared first on RestorePrivacy.
Mullvad VPN announced that macOS users may experience traffic leaks after applying recent system updates due to a firewall malfunction. According to a bulletin published earlier today on Mullvad’s blog, the macOS firewall fails to enforce certain routing rules properly, allowing some applications to bypass the VPN tunnel and send traffic outside of it. Mullvad …
The post Mullvad VPN Warns About Traffic Leaks on Latest macOS Sequoia appeared first on RestorePrivacy.
Discord, a popular communication platform, has been blocked in both Russia and Turkey, sparking widespread backlash from users in both countries. In Russia, the block took place yesterday, with the government citing concerns over illegal content, while Turkey implemented blocks a day prior, on October 7, 2024, claiming the platform was being used for criminal …
The post Discord Blocked in Russia and Turkey Amid Government Crackdowns appeared first on RestorePrivacy.
NordVPN, one of the world’s leading VPN service providers, has launched its first application featuring quantum-resilient encryption. Post-quantum cryptography support is currently available on NordVPN’s Linux client, with plans to extend this security to all applications by the first quarter of 2025. The move represents a significant step toward preparing for potential future threats posed …
The post NordVPN Adds NIST-Approved Quantum Encryption on the Linux Client appeared first on RestorePrivacy.
The European privacy rights organization noyb has filed a formal complaint against Mozilla for enabling a new feature in its Firefox browser that allegedly tracks users without their consent. The feature in question, called Privacy-Preserving Attribution (PPA), is designed to measure the effectiveness of online advertisements while minimizing data collection, but noyb claims it violates …
The post Mozilla Faces GDPR Complaint Over Firefox Tracking Users Without Consent appeared first on RestorePrivacy.
Telegram CEO Pavel Durov announced significant updates to the app’s Terms of Service and Privacy Policy, aimed at bringing the popular communications platform in alignment with the request of authorities to bring criminal activity under control. Most notably, Telegram will now share user IP addresses and phone numbers when responding to valid legal requests. Putting …
The post Telegram to Share User Data with Authorities on Legal Requests appeared first on RestorePrivacy.
The Tor Project has issued a statement in response to recent claims of a targeted de-anonymization attack on a Tor user. The attack, reportedly a “timing analysis” method, involved the long-retired Ricochet application. Although the incident raises concerns about the security of Tor’s Onion Services, the project maintains that its network remains healthy and that …
The post Tor Project Reassures Users Amid Claims of De-Anonymization Attack appeared first on RestorePrivacy.
DISCLAIMER:
Is your e-mail address compromised? Check it on this page.
In April 2026, the hacking group ShinyHunters claimed they had breached Amtrak. The group typically compromises organisations' Salesforce instances before demanding a ransom and later, if not paid, dumping the data publicly. They subsequently published the alleged data which contained over 2M unique email addresses along with names, physical addresses and customer support records.
In April 2026, education company McGraw Hill confirmed a data breach following an extortion attempt. Attributed to a Salesforce misconfiguration, the company stated the incident exposed "a limited set of data from a webpage hosted by Salesforce on its platform". More than 100GB of data was later publicly distributed, containing 13.5M unique email addresses across multiple files, with additional fields such as name, physical address and phone number appearing inconsistently across some records.
In March 2026, Hallmark suffered an alleged breach and subsequent extortion after attackers gained access to data stored within Salesforce. The data was later published after the extortion deadline passed, exposing 1.7M unique email addresses across both Hallmark and the Hallmark+ streaming service, along with names, phone numbers, physical addresses and support tickets.
In April 2026, the NSFW AI girlfriend platform My Lovely AI suffered a data breach that exposed over 100k users. The data included user-created prompts and links to the resulting AI-generated images, along with a small number of Discord and X usernames.
In March 2026, the anime streaming service Crunchyroll suffered a data breach alleged to have impacted 6.8M users. The exposed data is reported to have originated from the company's Zendesk support system where "name, login name, email address, IP address, general geographic location and the contents of the support tickets" were exposed. A subset of 1.2M email addresses from an alleged 2M record dataset being sold was later provided to HIBP.
In April 2026, the music trivia platform SongTrivia2 suffered a data breach that was subsequently published to a public hacking forum. The data contained a total of 291k unique email addresses sourced from either Google OAuth logins or accounts created on the site, the latter also containing bcrypt password hashes. The data also included names, usernames and avatars.
In March 2026, the personal development and achievement media brand SUCCESS suffered a data breach. The incident exposed 250k unique email addresses along with names, IP addresses, phone numbers and, for a limited number of staff members, bcrypt password hashes. The data also included orders containing physical addresses and the payment method used. In SUCCESS' disclosure notice, they advised their system had also been abused to send offensive newsletters with quotes falsely attributed to contributors.
In March 2026, the NSFW AI companion platform Cuties AI suffered a data breach that was subsequently published to a public hacking forum. The incident exposed 144k unique email addresses along with display names, avatars, prompts and descriptions used to generate AI adult images, as well as URLs to the generated content. The data also included the account that created the content and a stated "preference" of either female or trans.
In March 2026, a breach of one of the many iterations of the BreachForums hacking forum known as "Version 5" was publicly disclosed. The incident exposed 340k unique email addresses along with usernames and argon2 password hashes.
In June 2015, custom gaming controller maker Scuf Gaming suffered a data breach. The incident exposed 129k unique email addresses along with usernames, display names, IP addresses and password hashes.
In March 2026, the audio production tools company Sound Radix disclosed a data breach that they subsequently self-submitted to HIBP. The incident impacted 293k unique email addresses and names. Sound Radix advised that it is possible that additional data including hashed passwords may have been exposed, and that no financial or credit card information was impacted.
In around 2011, the now defunct RuneScape Boards forum (also known as RSBoards) suffered a data breach that was later redistributed as part of a larger corpus of data. The vBulletin-based service exposed 223k unique email addresses along with usernames, IP addresses and salted MD5 password hashes.
In March 2026, the online safety service Aura disclosed a data breach that exposed 900k unique email addresses. The data was primarily associated with a marketing tool from a previously acquired company, with fewer than 20k active Aura customers affected. Exposed data included names, phone numbers, physical and IP addresses, and customer service notes. Aura advised that no Social Security numbers, passwords or financial information were compromised.
In March 2026, the League of Legends custom skins service Divine Skins suffered a data breach. The incident was disclosed via the service's Discord server, where Divine Skins stated that an unauthorised third party accessed part of its systems, deleted all skins from the database and exposed email addresses and usernames. The data also contained a history of purchases made by users.
In March 2026, the Turkish restaurant chain Baydöner suffered a data breach which was subsequently published to a public hacking forum. The incident exposed over 1.2M unique email addresses along with names, phone numbers, cities of residence and plaintext passwords. A small number of records also included Turkish national ID number and date of birth. In their disclosure notice, Baydöner stated that payment and financial data was not affected.
In early 2026, data purportedly sourced from the recipe and meal planning service Provecho was alleged to have been obtained in a breach. The exposed data included 713k unique email address along with username and the creator account holders followed. Provecho has been notified and is aware of the claims surrounding the incident.
In February 2026, the couples and relationship app Lovora allegedly suffered a data breach that exposed 496k unique email addresses. The data also included users’ display names and profile photos, along with other personal information collected through use of the app. The app’s maker, Plantake, did not respond to multiple attempts to contact them about the incident.
In February 2026, the porn addiction app Quitbro allegedly suffered a data breach that exposed 23k unique email addresses. The data also included users’ years of birth, responses to questions within the app and their last recorded relapse time. The app’s maker, Plantake, did not respond to multiple attempts to contact them about the incident.
In February, the AI-powered comic generation platform KomikoAI suffered a data breach. The incident exposed 1M unique email addresses along with names, user posts and the AI prompts used to generate content. The exposed data enables the mapping of individual AI prompts to specific email addresses.
In February 2026, Dutch telco Odido was the victim of a data breach and subsequent extortion attempt. Shortly after, a total of 6M unique email addresses were published across four separate data releases over consecutive days. The exposed data includes names, physical addresses, phone numbers, bank account numbers, dates of birth, customer service notes and passport, driver’s licence and European national ID numbers. Odido has published a disclosure notice including an FAQ to support affected customers.
DISCLAIMER:
“We’ll have a generation of security professionals who can supervise AI but can’t function without it."
Categories: AI Research, Sophos Insights
Tags: AI, AI Cybersecurity, AI RESEARCH, Generative AI, SOC
Following on from our preview, here’s the full rundown on LLM salting: a novel countermeasure against LLM jailbreaks, developed by AI researchers at Sophos X-Ops
Categories: AI Research
Tags: AI, CAMLIS, Featured, jailbreak, LLM, salting, Sophos X-Ops
On October 22-24, SophosAI will present research on ‘LLM salting’ (a novel countermeasure against jailbreaks) and command line classification at CAMLIS 2025
Categories: AI Research
Tags: AI, CAMLIS, Featured, LLM, Sophos X-Ops
Analyzing dark web forums to identify key experts on e-crime
Categories: AI Research, Threat Research
Tags: AI, cybercrime, Dark Web, Featured, threat activity cluster, threat actors
Sophos X-Ops’ research, presented at Virus Bulletin 2024, uses ‘multimodal’ AI to classify spam, phishing, and unsafe web content
Categories: AI Research
Tags: Featured, Large Language Models, Multimodal AI, Sophos X-Ops, spam detection, Web Content Filtering
SophosAI’s framework for upgrading the performance of LLMs for cybersecurity tasks (or any other specific task) is now open source.
Categories: AI Research
Tags: deepspeed, Featured, LLM, LLM tuning
“LLMbotomy” research reveals how Trojans can be injected into Large Language Models, and how to disarm them.
Categories: AI Research
Tags: AI Trojans, Featured, LLM
On October 24 and 25, SophosAI presents ideas on how to use models large and small—and defend against malignant ones.
Categories: AI Research
Tags: AI Trojans, anti-phishing, CAMLIS, Featured, Google, LLM, small model machine learning
Applying generative AI, bad actors could tailor disinformation campaigns to affect election outcomes on a massive scale with relatively little effort.
Categories: AI Research
Tags: adversarial ai, Featured, Generative AI, misinformation, scampaign
Sophos' Younghoo Lee will present his research on the use of AI to analyze both text and image data to classify spam, phishing, and unsafe web content in Dublin.
Categories: AI Research
Tags: anti-phishing, Featured, Large Language Models, Multimodal AI, spam detection, Web Content Filtering
Comparative Sophos X-Ops testing not only indicates which models fare best in cybersecurity, but where cybersecurity fares best in AI
Categories: AI Research
Tags: Featured, Large Language Models
Categories: AI Research, Threat Research
Tags: adversarial ai, artificial intelligence, Featured, Generative AI, scams, Sophos X-Ops
The conference on machine learning in cybersecurity is key to open exchange of research and knowledge.
Categories: AI Research
Tags: artificial intelligence, CAMLIS, Featured, Large Language Models, scams, Web Content Filtering
AI Village talk highlights how generative can be used to automate the creation of fraud campaigns, generating hundreds of fraudulent sites.
Categories: AI Research
Tags: adversarial ai, DEF CON, Generative AI, Large Language Models, web scams
Sophos AI team employs GPT and other large language models as teachers to train smaller models to label websites.
Categories: AI Research
Tags: BERT, Featured, GPT-3, Large Language Models, Sophos X-Ops, T5 Large LLM, Web filtering, website categorization
DISCLAIMER:

An anonymous cybersecurity researcher discovered and reported to Safety Detectives about an unencrypted and non-password-protected database that contained approximately 7,000 records. Exposed data included names, email addresses, phone numbers, security clearance status or level, and other personal information.
The publicly exposed database was not password-protected or encrypted. It contained 7,028 records marked as “resume bank data” with potentially sensitive applicant information. In a reverse DNS search, it was identified that the IP address that hosted the documents traced back to a website called DomeWatch.us. According to information posted on House.gov by the Democratic Whip, DomeWatch is the House Democrats’ Official Online Resume Bank. On its Jobs section, DomeWatch posts current openings across Democratic Members’ offices and committees on Capitol Hill as well as related internships or fellowships. Individuals can submit their resumes using either the employment portal (which was created in November 2012) or the official mobile apps for both iOS and Android. The submissions are accessible by Senate Democratic offices.
The registration and technical contacts of the domain were promptly notified of the exposure. Public access to the database was restricted the same day, and it was no longer visible. Later on, they replied with a message that read: “Thanks for flagging”. In the About Us section of the website, it states that resumes remain in the bank for 90 days; once 3-months-old, the resume is automatically archived. However, nearly all of the records exposed were indicated with timestamps circa 2024-2025. It is unclear if this was a backup of archive data or otherwise. It is also unclear why these records appeared to have been kept for longer than the stated dates of storage.
The records indicated fields with information such as: internal ID numbers, application codes, first name, last name, phone number, email address, bio or congress experience, education, military service, security clearance and level, office interest, interest issues, home state, languages, political party affiliation, action tokens, and more. In total, the records listed 469 individuals with “top secret” federal security clearance as well as 4,221 individuals with congress experience. In regards to political affiliation, 6,300 individuals listed marked the Democratic Party; 17, the Republican Party; and 265, “Independent” or “Other”. The database also contained weblinks to Google forms and other documents.
According to the description on the Google Play Store: DomeWatch is a product of the Office of Democratic Whip Katherine Clark. It is designed to help House staff, the press, and the public better follow the latest developments from the US House of Representatives Floor. The app uses data from both majorityleader.gov and demcom.house.gov, which is the official intranet for House Democratic staff (available only within the House of Representatives firewall).





Any data exposure of a resume bank that contains potentially sensitive applicant information presents significant cybersecurity and privacy risks. When it comes to social engineering and phishing, the more personally identifiable information available, the more it may increase the potential success rate of a targeted attack. These records pose additional risks due to the fact that many of these individuals have working or volunteering experience in the government, Congress, political campaigns, or the military. Many of them also have security clearances, language skills, and political party affiliations that may potentially be of interest to malefactors.
In the current political environment, profiling and targeted harassment are notable potential risks. Another serious concern would be adversaries targeting specific individuals with privileged access to government systems, making them potentially high-value targets for espionage, recruitment, or blackmail. This isn’t an assertion that there are any national security risks to this exposure or that the data was ever at risk. These details are only here to provide hypothetical risk scenarios for educational purposes.
According to reports by AP, in July 2025, criminals used AI to create a deepfake of US Secretary of State Marco Rubio and attempted to contact foreign ministers. This raises serious potential concerns of how these individuals could be targeted for AI-assisted social engineering attempts, as many of them are currently (or have been previously) employed by members of Congress.
It is highly recommended that individuals who believe their PII or contact details may have potentially been exposed in any data breach take additional steps to validate job opportunities or suspicious communications. It is a good idea to enable MFA on email and mobile accounts that are associated with the potentially exposed data. Change passwords of affected accounts and never reuse passwords or variants of previously used passwords. For individuals with security clearance, there may be additional requirements to report the potential exposure so the incident is documented and any necessary mitigations can be applied. Strictly communicate through official channels and validate that the person or office is who they claim to be.
It is not known what internal safeguards are in place to protect congressional staff, interns, and volunteers. Hypothetically, these individuals could be potential targets because attackers might believe that their email accounts or contacts could provide policy intelligence, influence campaigns, or access government systems. It is not implied that there was ever any risk to this exposure. It is not known if the data was accessed by anyone else or how long the database was publicly exposed.
No wrongdoing by DomeWatch, or its employees, agents, contractors, affiliates, and/or related entities is implied here. It is not claimed either that any internal, applicant, or user data was ever at imminent risk. This report was published to raise public awareness and help strengthen data protection and cybersecurity practices. The hypothetical data-risk scenarios presented in this report are strictly and exclusively for educational purposes and do not reflect, suggest, or imply any actual compromise of data integrity.
The Safety Detectives’ Cybersecurity Team didn’t get access to the database, which means we could not download, retain, or share any data. This report has been shared with our team by an anonymous cybersecurity researcher. The limited number of redacted screenshots included in this article are used solely for verification and documentation purposes. We disclaim any and all liability arising from the use, interpretation, or reliance on this disclosure. We publish our findings to raise awareness of issues of data security and privacy.
The Safety Detectives research lab is a pro bono service that aims to help the online community defend itself against cyber threats while educating organizations on how to protect their users’ data. The overarching purpose of our web mapping project is to help make the internet a safer place for all users.
Our previous reports have brought multiple high-profile data leaks to light, including 61 million records allegedly belonging to Verizon USA and listed for sale on a well-known hacker’s forum.
Our previous work also includes the discovery of a clear web forum post where a threat actor publicized a database with 10,000 records allegedly belonging to VirtualMacOSX.

A ransomware attack targeting Collins Aerospace’s MUSE check-in software caused widespread disruption across European airports beginning Friday, with continued delays and flight cancellations reported through the weekend.
The European Union Agency for Cybersecurity (ENISA) confirmed the incident on Monday, stating that “the type of ransomware has been identified. Law enforcement is involved to investigate.” Affected airports included London Heathrow, Brussels Zaventem, Berlin Brandenburg, and others using Collins’ automated check-in systems.
The attack disabled critical airline services, forcing airports to revert to manual boarding processes. Heathrow Airport told Reuters that “airlines across Heathrow have implemented contingencies whilst their supplier Collins Aerospace works to resolve an issue.” By Sunday, about half the airlines operating from Heathrow had restored partial access using backup systems.
The BBC obtained internal crisis memos showing Heathrow staff were instructed to continue manual check-ins while Collins rebuilt infected systems. However, the same memo warned that “more than a thousand computers may have been ‘corrupted’” and cleanup was mostly being done in person due to continued hacker presence within systems.
Brussels Airport canceled more than 130 outbound flights on Monday, while Berlin reported over an hour of delays for many departures. The Berlin Marathon worsened congestion at Brandenburg Airport, with passengers describing the experience as similar to early commercial air travel.
Collins Aerospace, a subsidiary of RTX, said on Monday it was “in the final stages of completing necessary software updates.” The company has not disclosed the exact nature of the ransomware strain, but reports suggest it may be linked to a group using the HardBit variant.
UK police have since arrested a man in his 40s in West Sussex in connection with the attack under the Computer Misuse Act. He has been released on conditional bail pending further investigation.
While ENISA and national agencies continue their inquiry, security experts like Sophos’ Rafe Pilling caution that “disruptive attacks are becoming more visible in Europe, but visibility doesn’t necessarily equal frequency.”

Cloudflare has successfully mitigated the largest distributed denial-of-service (DDoS) attack ever recorded, showcasing a concerning escalation in the scale of cyber threats.
“Cloudflare just autonomously blocked hyper-volumetric DDoS attacks twice as large as anything seen on the Internet before — peaking at 22.2 Tbps & 10.6 Bpps,” the company said in a tweet.
The previous record was an 11.5 Tbps UDP flood attack, which lasted 35 seconds. In contrast, Cloudflare’s report indicates that the latest attack lasted only about 40 seconds, which is a “hit-and-run” tactic designed to overwhelm defenses before they can respond fully.
This record-breaking incident combined multiple attack techniques in a single, massive multi-vector assault. Experts say such attacks are typically launched from enormous botnets (networks of compromised computers and IoT devices) that flood servers with traffic, rendering online services inaccessible to legitimate users.
Crucially, Cloudflare’s systems detected and blocked the attack autonomously, without any human intervention. By neutralizing the traffic at the network edge, close to its source, Cloudflare ensured that the intended targets remained fully operational.
Cloudflare’s success proves the growing importance of automated, machine learning-powered defenses, as traditional DDoS “scrubbing” centers, which are often reliant on manual traffic analysis, are ill-equipped to respond at this speed and scale.
As cybercriminals continue to refine their methods and expand their botnets, industry experts warn that hyper-volumetric DDoS attacks will likely become more frequent and more intense.

Valve has pulled the 2D platformer BlockBlasters from Steam after a malicious update enabled it to steal over $150,000 in cryptocurrency from users, including $32,000 from a Latvian streamer raising funds for cancer treatment. As reported by BleepingComputer and confirmed by malware researchers at G Data, the game was originally published on July 30, 2025, by Genesis Interactive and appeared legitimate, even earning more than 200 “Very Positive” reviews.
But a patch released on August 30 silently injected a cryptostealer, which began exfiltrating sensitive data such as crypto wallets, Steam credentials, browser extensions, and IP information from users’ machines. The campaign appears to have been targeted, with vx-underground reporting that “the Steam game was actually a cryptodrainer masquerading as a legitimate video game” and that some streamers were approached with fake promotional offers.
G Data’s analysis of the infected patch found a staged malware structure starting with a batch script named game2.bat, which checked for antivirus tools, harvested user information, and uploaded the data to a remote C2 server. Additional scripts (launch1.vbs, test.vbs) and executables (Client-built2.exe, Block1.exe) then loaded a Python-based backdoor and the StealC info-stealer. The malware added folder exclusions to Microsoft Defender and hid its actions behind the game’s launcher.
Latvian streamer Raivo Plavnieks (RastalandTV), who has stage 4 cancer, said they were infected during a live fundraiser. “For anybody wondering what is going on … my life was saved … until someone tuned in my stream and got me to download verified game on @Steam,” he posted on X.
Steam removed BlockBlasters on September 21. The incident follows a growing pattern of malware-laced games slipping past Valve’s initial screening, including Chemia and PirateFi. G Data noted that “hundreds of users are potentially affected” by the BlockBlasters campaign, which used password-protected archives and deprecated RC4 encryption to bypass detection.
As of early September, the game still had active players and was flagged as suspicious on SteamDB, reinforcing concerns about malware threats on mainstream game platforms.

Mexico’s Senate is moving forward with a new cybersecurity work agenda that could reshape the country’s digital regulation landscape. Led by the Senate’s Digital Rights Commission, the initiative seeks to develop and approve a comprehensive national cybersecurity law covering data protection, digital commerce, and online expression.
“With the Agency for Digital Transformation and Telecommunications, we discussed several topics, one of them being the organization of dialogue tables on cybersecurity to prepare the ruling on three initiatives that are in commissions for a national cybersecurity law,” said Luis Donaldo Colosio, President of the Digital Rights Commission.
The Senate aims to respond to the country’s fragmented cybersecurity framework, which currently lacks unified regulation. Existing laws criminalize certain cyber activities and mandate data protection, but oversight is split across multiple agencies. A recent legislative reshuffle has intensified the urgency, after the dissolution of Mexico’s data protection authority INAI and growing concerns about centralized power over digital governance.
According to the Digital Rights Commission, the absence of robust legislation “creates uncertainty for companies operating in the digital sector and exposes citizens to significant risks.” The new work plan includes cybersecurity training workshops during October, designated as Cybersecurity Month, as well as forums in November to update the General Law of Digital Rights.
The effort also includes a gender lens. A workshop titled “Legislating with a Gender Perspective in the Ecosystem” will be held in collaboration with Mujeres por más mujeres to help legislative teams embed equality into new digital policies.
If passed, the law would establish safeguards across digital platforms, social networks, and e-commerce tools, with a specific emphasis on protecting minors. The framework would also address the intersection of cybersecurity and free speech, a point that has drawn scrutiny in previous legislative proposals.
The final objective, Colosio noted, is to “establish a safer, more predictable, and equitable digital environment for all stakeholders.”

The Central Bank of Kenya (CBK) has launched the Banking Sector Cybersecurity Operations Centre (BS-SOC), a centralized facility aimed at improving cyber resilience across the country’s financial system.
Hosted within the CBK’s Cyber Fusion Unit, the BS-SOC will provide cyber threat intelligence, incident response, digital forensics, and cyber investigations. According to CBK, the centre is “a key part of the implementation of the Computer Misuse and Cybercrime (Critical Information Infrastructure and Cybercrime Management) Regulations, 2024” and aligns with the CBK Strategic Plan 2024–2027.
The launch comes amid a sharp rise in cyberattacks. Kenya’s Communications Authority reported 4.5 billion cyber threat events between April and June 2025, up 80.7% from the previous quarter. CBK’s own stress tests in May modeled a 5% chance of successful cyberattacks, with potential losses ranging from KSh 32.8 million to KSh 2.9 billion depending on severity.
CBK said it is working to harmonize the Commercial Banks Cybersecurity Guidelines (2017) and the Payment Service Providers Cybersecurity Guidelines (2019) with the 2024 regulations. In the meantime, regulated institutions are expected to comply with all three and report incidents to the BS-SOC within the stipulated timelines.
“The successful implementation of this initiative requires the full collaboration and cooperation of all stakeholders,” the CBK noted in its official statement. Governor Kamau Thugge added that “cyber threats continue to evolve. A sector-wide response is essential to protect Kenya’s financial system.”
Data from CBK also shows that cybercriminals siphoned KSh 1.59 billion from customer accounts in 2024, further underscoring the need for coordinated monitoring and response.
By integrating enforcement and threat response under one roof, CBK hopes to reduce fragmentation and give regulators better visibility into systemic cyber risks affecting banks and payment providers across Kenya.

The City of Yellowknife says its network has been safely restored following a cybersecurity incident that disrupted services for over a week.
The attack, first disclosed on September 15, forced the city to limit internal access and temporarily disable online services. Debit and credit card payments were suspended, library computers were offline, and patrons were restricted to borrowing five items at a time. As of Monday, most systems have returned to normal.
Public safety and critical infrastructure continued to operate throughout. “The city enacted its incident response protocols to contain the incident, including the implementation of additional measures to further enhance its network security,” officials said in a statement cited by NNSL.
Click and Fix YK, the city’s issue-reporting portal, remains offline, as does CityExplorer, its interactive mapping tool. Residents are being asked to email non-emergency issues while restoration continues.
There is no evidence of data loss so far. “To date, we have no evidence that any personal information was compromised in the incident,” the city confirmed. “In the event our investigation determines that personal information was compromised, we will contact those individuals directly.”
City Manager Stephen Van Dine told Cabin Radio the network breach was being handled carefully, saying, “We believe it is under control at this stage… we’re certainly more confident than we were 48 hours ago.” He noted there was no ransom demand and declined to label the event a confirmed cyberattack, only that “there was some kind of activity to get into our systems that shouldn’t be there.”
Third-party experts continue to assist with the investigation, and the city has promised a thorough post-incident review to evaluate the timeline, impacts, and potential long-term upgrades to network defenses.

SonicWall has disclosed a security incident involving its MySonicWall cloud backup service, confirming that threat actors gained access to a subset of firewall configuration files. The company said that fewer than 5% of its firewall install base was affected, but acknowledged the potential severity of the breach.
The attack involved a series of brute force attempts targeting the MySonicWall.com portal, allowing unauthorized access to firewall preference files stored in cloud backups. While credentials within the files were encrypted, SonicWall warned that “the files also included information that could make it easier for attackers to potentially exploit the related firewall.”
Security researchers noted that these configuration files often contain DNS, log, and user/group settings — sensitive data that could be leveraged in future attacks. As Arctic Wolf researchers pointed out, “nation-state hackers and ransomware groups previously have exploited such information to conduct subsequent attacks.”
SonicWall emphasized that this was not a ransomware event, stating it was “a series of brute force attacks aimed at gaining access to the preference files stored in backup.” The company has terminated the unauthorized backup point and is working with cybersecurity partners and law enforcement to assess the full scope of the breach.
The Cybersecurity and Infrastructure Security Agency (CISA) also issued an alert urging immediate action. “Customers with at-risk devices should implement the advisory’s containment and remediation guidance immediately,” the agency said.
SonicWall has published detailed guidance for users to determine if their firewall devices are affected. Impacted customers are advised to log in to their MySonicWall accounts, check for flagged serial numbers under the Product Management section, and follow the remediation steps, including credential resets and service reviews.
At present, there is no indication that the compromised files have been leaked online. However, the company stated that it will continue to monitor the situation and release further updates as necessary.

OpenAI is preparing stricter safety features for ChatGPT as it faces mounting lawsuits and scrutiny over teen protection. CEO Sam Altman confirmed the company will soon require users to verify their age if it suspects a user is under 18, saying the changes are meant to “prioritize safety ahead of privacy and freedom for teens.”
“When you log in to ChatGPT, a banner will appear asking you to verify your age,” the company explained. “You will have 60 days to complete this process, after which your access to ChatGPT will be blocked until you successfully complete the age verification process.”
OpenAI will rely on third-party service Yoti to perform the checks. “You will be asked to enter the necessary details to confirm your age,” the post continued. “Depending on the method you choose, you may be asked to take a selfie, upload a valid ID, or use the Yoti app. Once your age is verified, you will be redirected to ChatGPT and can continue using the service as usual.”
The system will automatically place under-18 users into a restricted version of ChatGPT, which blocks sexual content and adds safeguards. Parents will soon be able to link accounts to monitor chats, disable history, enforce blackout hours, and receive alerts if the AI detects signs of acute distress. OpenAI noted that in some cases, “we may involve law enforcement as a next step.”
The rollout comes as lawmakers question whether AI can reliably predict age. Researchers warn that language-based cues are easily manipulated, while recent lawsuits accuse ChatGPT of failing to prevent harm in long sessions with vulnerable teens.
Despite concerns about privacy trade-offs, Altman stood by the decision. “Not everyone will agree with how we are resolving that conflict,” he said, “but we believe it is a worthy tradeoff.”

CrowdStrike and Meta have jointly released CyberSOCEval, a new open-source benchmark suite designed to evaluate how large language models (LLMs) perform across critical security operations center (SOC) tasks like malware analysis, incident response, and threat detection.
Built on Meta’s CyberSecEval framework and integrated with CrowdStrike’s threat intelligence, the tool aims to give organizations a standardized way to test the effectiveness of AI models under real-world attack conditions. The benchmark suite, now available on GitHub, includes documentation, sample datasets, and guidance for integrating the tests into existing SOC environments.
The rise of AI in cybersecurity has made it harder for teams to choose the right tools. Many security products now claim AI capabilities, but without clear benchmarks, it’s been difficult to assess which models deliver real-world value. CyberSOCEval addresses this by simulating adversarial tactics and complex security scenarios, allowing teams to validate LLM performance before deployment.
Vincent Gonguet, Director of Product, GenAI at Superintelligence Labs at Meta, said the collaboration “introduces a new open source benchmark suite to evaluate the capabilities of LLMs in real world security scenarios. With these benchmarks in place, and open for the security and AI community to further improve, we can more quickly work as an industry to unlock the potential of AI in protecting against advanced attacks.”
Daniel Bernard, Chief Business Officer at CrowdStrike, added that “when two leaders like CrowdStrike and Meta come together, it’s larger than collaboration, it’s about setting the direction of cybersecurity for the AI era,” emphasizing the benchmark’s role in helping security teams adopt AI with confidence.
The companies hope CyberSOCEval will support both enterprise users and AI developers. Businesses get a transparent framework for comparison, while developers gain feedback on how their models handle realistic security workflows, including complex reasoning and industry-specific language.
ALL RSS FEEDS