In today’s cloud-first world, Amazon Web Services (AWS) is a cornerstone of digital transformation, supporting everyone from fast-moving startups to global enterprises. Its flexibility and scale allow organizations to store, process, and analyze enormous volumes of data in minutes, accelerating innovation at a pace that was once unimaginable. But that capability also raises the stakes: data must be secured against evolving threats while staying clear of compliance pitfalls.
This is where the AWS shared responsibility model comes into focus. The framework clearly defines how security and compliance responsibilities are divided between AWS and the customer. While AWS secures the underlying cloud infrastructure — including physical data centers, hardware, and core networking — customers are responsible for securing the data they place in the cloud. That means managing access, monitoring suspicious activity, encrypting sensitive data, and ensuring configurations align with security best practices.
Simply put, AWS provides a robust, secure foundation, but organizations must build their own digital fortresses on top of it. Recognizing and addressing these shared responsibilities is key to reducing risk, maintaining trust, and achieving true cloud resilience.
Common Data Security Challenges in AWS
Most cloud security issues aren’t caused by the cloud itself, but by how it’s configured. Gartner has predicted that through 2026, 99% of cloud security failures will be the customer’s responsibility. In AWS, data security problems rarely come from the platform; they surface in the gaps between configuration, visibility, and governance. Spotting where those gaps tend to form is the first step toward closing them.
Misconfigured Amazon S3 buckets
Amazon Simple Storage Service (Amazon S3) is built to store and retrieve virtually unlimited data, from application backups and customer records to large-scale analytics datasets. It’s durable and scales with minimal operational effort, which is why it’s embedded in many architectures.
That flexibility, however, introduces risk. S3 buckets can be configured for public or private access, and when those controls aren’t carefully managed, sensitive data can be inadvertently exposed to the internet. Many well-known data breaches weren’t the result of sophisticated attacks; they came down to a simple bucket misconfiguration.
For security teams, the lesson goes beyond “locking down buckets.” It requires adopting continuous configuration monitoring and automated guardrails that identify risky settings before they reach production. In cloud environments where change is constant, proactive controls are essential.
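As a concrete illustration, the sketch below uses boto3 (the AWS SDK for Python) to flag buckets that lack a public access block or whose bucket policy makes them public. The output format and the decision to report rather than remediate are assumptions for illustration; a production guardrail would typically run on a schedule or react to configuration-change events.

```python
# Minimal sketch: audit S3 buckets for missing public-access blocks or public policies.
# Assumes credentials with s3:ListAllMyBuckets, s3:GetBucketPublicAccessBlock,
# and s3:GetBucketPolicyStatus. Output format is illustrative.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def bucket_findings(bucket: str) -> list[str]:
    findings = []

    # A missing or partial public access block is worth flagging even if the bucket is private today.
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
        if not all(cfg.values()):
            findings.append("public access block is only partially enabled")
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            findings.append("no public access block configured")
        else:
            raise

    # AWS evaluates the bucket policy and reports whether it makes the bucket public.
    try:
        if s3.get_bucket_policy_status(Bucket=bucket)["PolicyStatus"]["IsPublic"]:
            findings.append("bucket policy grants public access")
    except ClientError as err:
        if err.response["Error"]["Code"] != "NoSuchBucketPolicy":
            raise

    return findings

for b in s3.list_buckets()["Buckets"]:
    for finding in bucket_findings(b["Name"]):
        print(f"{b['Name']}: {finding}")
```

In practice, teams often pair checks like this with account-level S3 Block Public Access and AWS Config rules so risky settings are caught before they ever reach production.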
Excessive or unmonitored access management
Identity and Access Management (IAM) is powerful because it lets teams move quickly. Developers can spin up resources, automate workflows, and collaborate across environments. But the very speed IAM enables can also introduce risk.
Today, 83% of cloud breaches involve an access-related component. And permissions rarely become excessive overnight. They accumulate quietly as organizations expand, projects multiply, and temporary access becomes permanent. Overly broad roles, dormant credentials, and service accounts with persistent privileges widen the attack surface, often without immediate visibility.
While the principle of least privilege is widely accepted in theory, it frequently breaks down in practice when operational demands outpace security reviews. What starts as a shortcut to keep work moving can quickly turn into persistent exposure.
Effective access management isn’t a one-time configuration but an ongoing discipline. It requires continuous auditing of entitlements, right-sizing permissions based on real usage patterns, and leveraging behavioral intelligence to detect anomalies early. In fast-changing AWS environments, resilience depends less on static policies and more on adaptive, intelligence-driven controls.
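One small, repeatable piece of that discipline is checking for dormant credentials. The sketch below uses the IAM credential report to surface access keys that appear unused for roughly 90 days; the threshold and the fields checked are assumptions chosen for illustration.

```python
# Minimal sketch: flag IAM users whose access keys appear unused for ~90 days,
# based on the IAM credential report. Threshold and handling are illustrative;
# assumes iam:GenerateCredentialReport and iam:GetCredentialReport permissions.
import csv
import io
import time
from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")

# Report generation is asynchronous; poll until it is ready.
while iam.generate_credential_report()["State"] != "COMPLETE":
    time.sleep(2)

report = iam.get_credential_report()["Content"].decode("utf-8")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)

for row in csv.DictReader(io.StringIO(report)):
    for key in ("access_key_1_last_used_date", "access_key_2_last_used_date"):
        last_used = row.get(key, "N/A")
        if last_used in ("N/A", "no_information"):
            continue  # key inactive or never used; handle per your own policy
        if datetime.fromisoformat(last_used) < cutoff:
            print(f"{row['user']}: {key} last used {last_used}")
```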
Limited visibility into sensitive data access and usage
In cloud security, control is impossible without visibility: even the most well-intentioned controls fail when teams can’t see what they’re protecting. Within AWS environments, sensitive data often moves across services, regions, and workloads at a pace that outstrips manual oversight.
When organizations lack clarity into who is accessing sensitive data, when it’s being accessed, and for what purpose, risk accumulates silently. True visibility must extend beyond infrastructure into the data layer, where context matters most.
This requires more than basic logging. Security teams should prioritize classifying sensitive data, mapping how it moves across the environment, and correlating access patterns so anomalous behavior surfaces quickly rather than months later during an audit.
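One practical building block is object-level (data event) logging in CloudTrail, which records who read or wrote individual objects rather than just bucket-level API calls. The sketch below shows the general shape of that configuration; the trail name and bucket ARN are placeholders.

```python
# Minimal sketch: enable S3 object-level (data event) logging for one bucket on an
# existing CloudTrail trail, so individual GetObject/PutObject calls are recorded.
# "my-trail" and the bucket ARN are placeholders; assumes cloudtrail:PutEventSelectors.
import boto3

cloudtrail = boto3.client("cloudtrail")

cloudtrail.put_event_selectors(
    TrailName="my-trail",  # placeholder: an existing trail in this account/region
    EventSelectors=[
        {
            "ReadWriteType": "All",            # capture both reads and writes
            "IncludeManagementEvents": True,   # keep normal API-level logging too
            "DataResources": [
                {
                    "Type": "AWS::S3::Object",
                    # Trailing slash means "all objects in this bucket" (placeholder ARN).
                    "Values": ["arn:aws:s3:::example-sensitive-bucket/"],
                }
            ],
        }
    ],
)
```

The resulting events can then be correlated with data classification results so unusual access to sensitive objects stands out instead of disappearing into general log volume.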
Sensitive data stored without proper encryption controls
Encryption is often assumed rather than verified. AWS provides robust native encryption capabilities, but they’re only effective when consistently applied and actively managed.
Data should be protected both at rest and in transit, with clear ownership over key management policies. Equally important is avoiding a fragmented approach where some datasets are rigorously secured while others fall through procedural cracks. Leading organizations enforce consistent encryption standards across all workloads, ensuring protection is systemic and not situational.
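As one way to make that verification systematic, the sketch below checks every bucket for a default encryption configuration and applies SSE-KMS where none exists. The key alias and the choice to auto-remediate rather than just report are illustrative assumptions.

```python
# Minimal sketch: check each S3 bucket for a default encryption configuration and,
# where none exists, apply SSE-KMS with a placeholder key alias.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
KMS_KEY_ALIAS = "alias/example-data-key"  # placeholder: an existing customer-managed key

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        s3.get_bucket_encryption(Bucket=name)
        continue  # a default encryption rule is already in place
    except ClientError as err:
        if err.response["Error"]["Code"] != "ServerSideEncryptionConfigurationNotFoundError":
            raise

    print(f"{name}: no default encryption, applying SSE-KMS")
    s3.put_bucket_encryption(
        Bucket=name,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": KMS_KEY_ALIAS,
                    },
                    "BucketKeyEnabled": True,  # reduces KMS request costs
                }
            ]
        },
    )
```

Protection in transit is typically enforced separately, for example with a bucket policy that denies requests where aws:SecureTransport is false.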
Inadequate monitoring and logging
AWS environments generate vast amounts of telemetry, but collecting logs alone is not enough. Without centralized monitoring and intelligent alerting, critical signals can easily get lost in the noise.
Effective logging does more than satisfy compliance requirements. It enables teams to reconstruct events, accelerate incident response, and maintain a proactive posture that identifies suspicious activity before it escalates into significant impact.
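To show what "intelligent alerting" can look like in practice, the sketch below pairs a CloudWatch Logs metric filter with an alarm on a CloudTrail log group, counting unauthorized API calls. The log group name, SNS topic ARN, and threshold are placeholders; real deployments tune these to their own baselines.

```python
# Minimal sketch: alert on unauthorized API calls by pairing a CloudWatch Logs metric
# filter with a CloudWatch alarm. Assumes CloudTrail already delivers to the named
# log group; names, ARN, and threshold are placeholders.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"                       # placeholder
ALARM_TOPIC = "arn:aws:sns:us-east-1:123456789012:sec-alerts"  # placeholder

# Count CloudTrail events rejected for missing permissions or denied access.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedAPICalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[
        {
            "metricName": "UnauthorizedAPICalls",
            "metricNamespace": "Security",
            "metricValue": "1",
        }
    ],
)

# Raise an alarm when the count spikes within a five-minute window.
cloudwatch.put_metric_alarm(
    AlarmName="UnauthorizedAPICalls",
    Namespace="Security",
    MetricName="UnauthorizedAPICalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=5,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[ALARM_TOPIC],
)
```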
Lessons from Recent Breaches
Several high-profile incidents show that AWS breaches are rarely caused by flaws in AWS itself and almost always stem from how cloud resources are configured and governed. Misconfigured Amazon S3 buckets and overly permissive access controls have repeatedly exposed sensitive operational data, customer PII, and even source code.
In March 2025, health tech company ESHYFT accidentally left a massive amount of nurse data exposed in an unprotected AWS S3 bucket. The leak lasted for months and included everything from profile photos and facial images to scans of Social Security cards, driver’s licenses, professional certificates, CVs, and monthly work schedules. A spreadsheet with over 800,000 entries listing nurse IDs, facility names, shift times and dates, and total working hours was completely open for anyone to access.
Similarly, WorkComposer, a workplace tracking and productivity tool, accidentally leaked over 21 million screenshots from employee devices due to an unprotected S3 bucket. Just a few months earlier, WebWork, another time-tracking app, exposed more than 13 million screenshots, including sensitive emails and passwords.
The takeaway is clear: Cloud risk is often a problem of data visibility and configuration, not infrastructure. Organizations need to know exactly where sensitive data resides in AWS, how it’s classified, who can access it, and which misconfigurations could turn into their next high-profile breach.
Enhance AWS Data Security with Fortra DSPM
AWS security isn’t just about turning features on; it’s about knowing exactly what’s stored where, who can access it, and how it’s protected.
Fortra’s Data Security Posture Management (DSPM) helps organizations find, classify, and protect sensitive data wherever it lives. It works alongside AWS’s native security capabilities, adding an extra layer of visibility, control, and automation while also helping businesses meet the requirements of AWS’s shared responsibility model.
By integrating Fortra DSPM, organizations gain a clearer picture of their data, reduce exposure windows, and strengthen confidence in their cloud operations.