DP-900 Objective 3.1: Describe Capabilities of Azure Storage
DP-900 Exam Focus: This objective covers three core Azure storage services: Azure Blob storage for unstructured object storage with block, append, and page blobs and hot/cool/archive access tiers; Azure File storage for managed SMB/NFS file shares replacing traditional file servers; and Azure Table storage for NoSQL key-value data with partition and row keys enabling massive scale at low cost. Understanding capabilities, use cases, and choosing appropriate storage for different scenarios is essential for the exam.
Understanding Azure Storage Services
Azure Storage provides comprehensive cloud storage solutions for diverse data types and access patterns. The platform delivers durable, highly available, and massively scalable storage accessible globally via HTTP/HTTPS. Azure Storage includes multiple services optimized for specific scenarios: Blob storage for unstructured object data, File storage for managed file shares, Table storage for NoSQL structured data, Queue storage for messaging, and Disk storage for virtual machines. Understanding these services enables selecting appropriate storage for applications' specific requirements. This objective focuses on three core storage services: Blob, File, and Table storage, each serving distinct use cases from media files to shared application data to IoT telemetry.
Azure Storage services share common capabilities making them enterprise-ready. Durability through redundancy options protects data against hardware failures, disasters, or outages with configurations ranging from locally redundant storage (LRS) keeping three copies within a single datacenter to geo-redundant storage (GRS) replicating across Azure regions hundreds of miles apart. Security includes encryption at rest by default, encryption in transit via HTTPS, and granular access controls through shared access signatures, Azure Active Directory integration, and network isolation. Scalability enables massive capacity growing from gigabytes to petabytes without infrastructure management. Global accessibility allows data access from anywhere via REST APIs, Azure SDKs, PowerShell, CLI, or Portal. Cost-effectiveness through pay-as-you-go pricing and storage tiers optimizes expenses. These shared characteristics combined with service-specific capabilities make Azure Storage a comprehensive platform for diverse storage needs.
Azure Blob Storage
Core Capabilities and Features
Azure Blob storage is Microsoft's object storage solution for unstructured data in the cloud. Blob stands for Binary Large Object. Unlike file systems organizing data hierarchically in folders, or databases organizing data in tables, object storage treats data as individual objects (blobs) stored in a flat namespace within containers. Each blob has a unique identifier (URL) and metadata. Blobs can be any type of text or binary data: documents, images, videos, backups, log files, or application data. Azure Blob storage scales massively supporting petabytes of data, provides multiple redundancy options ensuring durability, and offers REST APIs, Azure SDKs, and the Azure CLI for programmatic access. PowerShell and the Azure Portal provide management interfaces.
Key features include access tiers optimizing costs based on data access frequency; lifecycle management policies automating tier transitions and deletion based on age or last access; versioning maintaining previous blob versions enabling recovery from accidental modifications or deletions; soft delete recovering deleted blobs or containers within retention period; immutable storage preventing deletion or modification for compliance requirements; blob snapshots capturing point-in-time read-only copies; change feed logging all changes for auditing or replication; and Azure Data Lake Storage Gen2 adding hierarchical namespace to Blob storage for big data analytics. Security features include encryption at rest by default, private endpoints for network isolation, Azure Active Directory integration, shared access signatures providing time-limited delegated access, and access policies controlling permissions at container or blob level.
Blob Types
Azure Blob storage supports three blob types optimized for different scenarios. Block blobs store text and binary data composed of blocks that upload individually and assemble into blobs. They're the default blob type and most commonly used. Block blobs support up to approximately 190TB (4.75TB with 100MB blocks or 190TB with 4000MB blocks). They're ideal for storing documents, images, videos, and general-purpose files. Block blobs optimize for sequential upload and download operations. They support parallel uploads of blocks improving performance for large files. Once committed, blocks become immutable enabling efficient content-based deduplication. Block blobs support all access tiers (hot, cool, archive) and lifecycle management.
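The size limits above follow directly from the block count and block size limits; a quick back-of-the-envelope check, plain arithmetic with no Azure SDK involved:

```python
# A block blob's maximum size = max blocks per blob x max block size.
MAX_BLOCKS = 50_000

def max_blob_tib(block_size_mib: int) -> float:
    """Approximate maximum block blob size in TiB for a given block size."""
    total_mib = MAX_BLOCKS * block_size_mib
    return total_mib / (1024 * 1024)  # MiB -> TiB

# Older 100 MiB block limit: ~4.75 TiB per blob.
legacy_limit = max_blob_tib(100)
# Current 4000 MiB block limit: ~190 TiB per blob.
current_limit = max_blob_tib(4000)
print(round(legacy_limit, 2), round(current_limit, 1))
```

This is why the "approximately 190TB" figure only applies when uploads use the larger 4000MB block size; clients using small blocks hit the lower ceiling.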
Append blobs optimize for append operations where data is added to the blob's end without modifying existing data. They support up to 195GB. Append blobs are ideal for logging scenarios where applications continuously append data: application logs, audit trails, streaming data capture. They don't support random writes or modifications to existing data, only appending to the end. This constraint enables optimization for append-only workloads. Append blobs suit scenarios like virtual machine logging, database transaction logs, or telemetry data streaming. Page blobs store random access files up to 8TB organized in 512-byte pages. They optimize for frequent read/write operations at random locations within the blob. Page blobs serve as disks for Azure Virtual Machines, both OS disks and data disks. While less commonly used directly by applications, page blobs are critical for IaaS workloads requiring disk storage. They support efficient updates to specific byte ranges without rewriting entire blobs.
Access Tiers and Lifecycle Management
Azure Blob storage provides access tiers optimizing costs based on how frequently data is accessed. Hot tier is designed for data accessed frequently or requiring low latency. It has highest storage costs but lowest access and transaction costs. Use hot tier for active data undergoing frequent reads or writes, website content and images served to users, data being actively processed, and short-term backup and disaster recovery. Hot tier provides optimal performance with lowest latency. Cool tier targets infrequently accessed data stored at least 30 days. It has lower storage costs than hot tier but higher access and transaction costs. Early deletion fees apply if data moves or deletes before 30 days. Use cool tier for short-term backup datasets, older media content not viewed frequently but available immediately when requested, and large datasets collected while gathering more data for future processing.
Archive tier provides lowest storage costs for rarely accessed data stored at least 180 days. It has highest access costs and rehydration latency. Data in archive tier is offlineâyou cannot read it directly. To access archived data, you must rehydrate it by copying to hot or cool tier, taking hours (standard rehydration) or less with high-priority rehydration at higher cost. Early deletion fees apply before 180 days. Use archive tier for long-term backups retained for compliance, original raw data preserved after processing, rarely accessed data that must be retained, and compliance/regulatory data with infrequent access requirements. Archive tier dramatically reduces storage costs for data you need to keep but rarely access. Lifecycle management policies automate tier transitions and deletions based on rules. For example, policies can move blobs to cool tier 30 days after last modification, archive after 90 days, and delete after 365 days, optimizing costs without manual intervention.
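The tier thresholds above can be condensed into a small rule-of-thumb helper. This is an illustrative sketch using the 30- and 180-day minimums from the text, not an Azure API; the function name and parameters are assumptions:

```python
def suggest_access_tier(days_since_access: int, needs_immediate_read: bool = True) -> str:
    """Suggest a blob access tier from access recency.

    Mirrors the guidance above: hot for active data, cool after ~30 days,
    archive after ~180 days when offline (rehydration) latency is acceptable.
    """
    if days_since_access < 30:
        return "hot"
    if days_since_access < 180 or needs_immediate_read:
        return "cool"  # still readable online, lower storage cost than hot
    return "archive"   # offline; must be rehydrated before reading

print(suggest_access_tier(5))                                 # hot
print(suggest_access_tier(60))                                # cool
print(suggest_access_tier(365, needs_immediate_read=False))   # archive
```

Note the `needs_immediate_read` guard: archive is only appropriate when hours-long rehydration is acceptable, which is exactly the early-deletion and latency trade-off the tier descriptions spell out.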
Use Cases and Best Practices
Azure Blob storage excels for numerous scenarios. Media storage and streaming serve images, audio, and video to web and mobile applications. Content delivery networks (CDNs) cache blob content at edge locations globally reducing latency. Backups and disaster recovery store database backups, virtual machine backups, and file backups durably. Blob storage's redundancy options and immutable storage support compliance requirements. Big data analytics store massive datasets for analysis by Azure HDInsight, Azure Databricks, Azure Synapse Analytics, or external tools. Data lakes using Azure Data Lake Storage Gen2 (built on Blob storage) provide hierarchical namespace optimizing big data workloads.
Document and file storage provide cloud storage for documents accessible from multiple locations. Static website hosting serves HTML, CSS, JavaScript, and images directly from Blob storage without web servers, ideal for single-page applications. Log and diagnostic data storage centrally stores application logs, server logs, and diagnostic data for analysis. Archive and compliance data stores data meeting regulatory retention requirements cost-effectively. Best practices include using appropriate blob types (block blobs for most scenarios, append blobs for logs, page blobs for disks); selecting access tiers matching usage patterns; implementing lifecycle policies automating tier transitions; enabling versioning and soft delete protecting against accidental data loss; using private endpoints and network isolation securing sensitive data; and organizing blobs with consistent naming and metadata schemes facilitating management and queries.
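A lifecycle policy implementing the cool-at-30 / archive-at-90 / delete-at-365 progression described earlier takes this JSON shape in a storage account management policy; the rule name and `logs/` container prefix are placeholders:

```json
{
  "rules": [
    {
      "name": "age-out-log-blobs",
      "enabled": true,
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": [ "blockBlob" ],
          "prefixMatch": [ "logs/" ]
        },
        "actions": {
          "baseBlob": {
            "tierToCool":    { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete":        { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
```

Applied via the Portal, CLI, or ARM template, the policy runs automatically; no per-blob intervention is needed, which is the point of lifecycle management for large containers.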
Azure File Storage
Core Capabilities and Features
Azure File storage provides fully managed file shares in the cloud accessible via industry-standard Server Message Block (SMB) or Network File System (NFS) protocols. Unlike Blob storage designed for object storage with REST API access, Azure Files presents a traditional file system interface familiar to applications and users. File shares mount as network drives on Windows, Linux, and macOS, enabling applications to access Azure Files using standard file system APIs without code changes. This compatibility enables lift-and-shift scenarios moving applications requiring file shares to the cloud. Azure Files supports SMB 3.x on Windows, Linux, and macOS (SMB 2.1 is also accepted but lacks encryption) and NFS 4.1 on Linux via premium file shares.
Key features include hierarchical directory structure supporting folders and files like traditional file systems; concurrent access from multiple clients mounting file shares simultaneously; snapshots capturing point-in-time read-only copies of entire file shares for backup or recovery; soft delete retaining deleted files or shares within retention period protecting against accidental deletion; Azure File Sync enabling caching of Azure file shares on Windows Servers on-premises or in cloud for distributed access with cloud storage backend; encryption at rest and in transit securing data; Azure Active Directory Domain Services authentication for SMB shares providing identity-based access control; and REST API access enabling programmatic management alongside SMB/NFS access. Performance tiers include Standard using HDD-based storage and Premium using SSD-based storage for latency-sensitive workloads.
Deployment and Access
Deploying Azure Files involves creating a storage account, then creating file shares within the account. File shares support up to 100TB per share in both Standard and Premium tiers (Standard accounts require the large file shares feature for capacities beyond 5TB). Once created, file shares mount on Windows using the net use command or File Explorer, on Linux using the mount command with CIFS utilities, and on macOS using Finder or the mount command. Azure Portal provides mount instructions including authentication credentials. Mounted file shares appear as network drives enabling applications to read and write files using standard file I/O operations. Multiple clients mount shares simultaneously with appropriate share-level access controls determining permissions.
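The mount commands follow the standard documented pattern; this template uses placeholder account, share, and key values in angle brackets (in practice, copy the exact command from the share's Connect blade in the Portal):

```shell
# Windows: map the share as drive Z:
net use Z: \\<account>.file.core.windows.net\<share> /user:Azure\<account> <storage-account-key>

# Linux: mount via CIFS (requires the cifs-utils package)
sudo mkdir -p /mnt/azfiles
sudo mount -t cifs //<account>.file.core.windows.net/<share> /mnt/azfiles \
  -o vers=3.0,username=<account>,password=<storage-account-key>,serverino
```

Port 445 outbound must be open for SMB mounts; blocked 445 is the most common cause of mount failures from corporate or ISP networks.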
Azure Files supports two authentication methods. Storage account key provides full access to file shares within the storage account: simple, but it grants broad access. Azure Active Directory Domain Services (Azure AD DS) authentication enables identity-based access control using existing AD credentials and supports granular NTFS permissions at file and directory level, making it more secure and manageable for enterprise scenarios. REST API access enables programmatic file operations for scenarios where mounting as a network drive isn't suitable. APIs support operations like creating directories, uploading files, listing contents, and deleting files. Azure SDKs, PowerShell, and Azure CLI provide convenient interfaces to REST APIs. The dual access model, file system protocols plus REST APIs, provides flexibility supporting diverse scenarios from lift-and-shift migrations to modern cloud-native applications.
Azure File Sync
Azure File Sync extends Azure Files enabling caching of file shares on Windows Servers on-premises or in Azure. This hybrid solution centralizes file shares in Azure Files benefiting from cloud scalability and redundancy while providing local access performance. File Sync replicates file shares from Azure Files to one or more Windows Servers, maintaining synchronized copies. Users access files from local servers at LAN speeds while Azure Files serves as central cloud repository. Cloud tiering feature optionally keeps only recently accessed files cached locally, moving older files to Azure Files as stubs that transparently retrieve when accessed. This optimizes local storage usage while maintaining seamless access to all files.
Azure File Sync suits distributed enterprises with branch offices where each office has a Windows Server caching the central file share. This provides local performance without maintaining separate file servers or complex replication. Backup and disaster recovery scenarios back up local file servers to Azure Files, ensuring off-site copies. Server consolidation migrates multiple local file servers to a single Azure Files share with servers caching subsets locally. Development and testing scenarios provide centralized source code or application files with local caching. Deploying Azure File Sync requires creating a Storage Sync Service resource in Azure, installing the Azure File Sync agent on Windows Servers, registering the servers with the Sync Service, and creating sync groups defining which file share synchronizes with which local paths. The solution handles conflicts, throttling, and resuming interrupted syncs automatically. Azure File Sync bridges on-premises and cloud enabling gradual cloud adoption while preserving local access performance.
Use Cases and Best Practices
Azure Files serves multiple scenarios. Lift-and-shift migrations enable moving applications requiring shared file storage to cloudâmany on-premises applications use network file shares for configuration, logs, or shared data. Shared application settings store configuration files accessible by multiple application instances running in VMs or containers. Container persistent storage mounts Azure Files as persistent volumes in Azure Kubernetes Service or Azure Container Instances preserving data across container restarts. Development tools sharing distributes development utilities, libraries, or build outputs across development team VMs. Diagnostic logs centralize log collection from multiple application instances to shared file share for processing.
Hybrid file shares with Azure File Sync provide distributed branch offices with local access to centrally managed files. Home directories and roaming profiles store user profile data in Azure Files accessible from different machines. Database and application backups store SQL Server or application backups using native file system backup tools. Media workflows store media files processed by rendering or encoding applications. Best practices include choosing the appropriate performance tier (Standard for general-purpose, Premium for latency-sensitive workloads); implementing snapshots for backup and recovery; using Azure AD DS authentication for enterprise scenarios requiring granular permissions; planning network bandwidth for initial data uploads and ongoing sync traffic; monitoring performance metrics identifying bottlenecks; and organizing files with clear directory structures and naming conventions. For hybrid scenarios, Azure File Sync provides seamless bridge between on-premises and cloud maintaining local performance with cloud scalability.
Azure Table Storage
Core Capabilities and Features
Azure Table storage is a NoSQL key-value store for structured non-relational data providing massive scale at low cost. It stores structured data in schemaless design where each entity (row) in table can have different properties (columns). This flexibility suits evolving data models where entity structures change over time or vary by type. Tables contain entities identified by partition key and row key combination providing unique identifier. Unlike relational databases with fixed schemas, foreign keys, and joins, Table storage uses simple key-based access pattern optimized for massive scale and high throughput at low cost. Azure Table storage is part of Azure Storage account, sharing infrastructure with Blob and File storage.
Key features include schemaless design allowing flexible entity structures; massive scalability supporting terabytes of data and millions of entities per table; low cost optimizing for large-scale storage with simple access patterns; REST API access enabling language-agnostic programmatic operations; automatic indexing on partition and row keys providing fast lookups; strong consistency for reads within the primary region (reads from a geo-replicated secondary are eventually consistent); Azure Storage account integration sharing security, encryption, and management features with Blob and File storage; and backup and recovery through Azure Storage redundancy options. Table storage doesn't support complex queries, joins, or relationships. Queries primarily filter by partition and row keys. It provides fast access when querying by keys but slower performance for scans across partitions. Azure Cosmos DB Table API offers global distribution, lower latency, and more features while maintaining Table storage API compatibility at higher cost.
Partition Keys and Row Keys
Azure Table storage uses partition key and row key as a two-part primary key uniquely identifying entities. Understanding these keys is fundamental to effective Table storage design. Partition key groups related entities into partitions (physical storage units). All entities with the same partition key store together. Partition key choice critically impacts performance and scalability. Good partition keys distribute data evenly across many partitions enabling parallel processing and preventing hot partitions receiving disproportionate traffic. Azure Table storage can serve different partitions from different servers scaling horizontally. Poorly chosen partition keys create bottlenecks concentrating traffic on few partitions.
Row key uniquely identifies entities within a partition. Combined, partition key and row key form a unique identifier for each entity across the entire table. The combination enables efficient lookups: point queries specifying both partition and row keys are the fastest operations, requiring a single lookup. Querying by partition key only returns all entities in that partition (a partition scan), faster than a table scan but slower than a point query. Queries not using keys require table scans across all partitions and are the slowest. Design examples: for storing customer orders, use CustomerID as partition key (grouping orders per customer) and OrderID as row key (uniquely identifying orders). For IoT telemetry, use DeviceID as partition key (grouping readings per device) and timestamp as row key. For application logs, use application or server identifier as partition key and timestamp as row key. Good design distributes data evenly, matches query patterns for efficiency, and groups related entities for batch operations within partitions.
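The IoT key design above (DeviceID as partition key, a reverse timestamp as row key so the newest readings sort first) can be sketched without any Azure dependency; `make_keys` and the tick constant are hypothetical names for illustration:

```python
from datetime import datetime, timezone

MAX_TICKS = 10**10  # any constant comfortably larger than epoch seconds

def make_keys(device_id: str, reading_time: datetime) -> tuple[str, str]:
    """Build (PartitionKey, RowKey) for a telemetry entity.

    Inverted, zero-padded timestamps make lexicographic order equal
    newest-first order, so 'recent readings for a device' becomes a
    cheap row key range query within one partition.
    """
    inverted = MAX_TICKS - int(reading_time.timestamp())
    return device_id, f"{inverted:011d}"

older = make_keys("sensor-42", datetime(2024, 1, 1, tzinfo=timezone.utc))
newer = make_keys("sensor-42", datetime(2024, 6, 1, tzinfo=timezone.utc))

# Same partition (same device); the newer reading's row key sorts first.
assert older[0] == newer[0]
assert newer[1] < older[1]
```

The zero-padding matters: row keys compare as strings, so unpadded numbers would sort `"9" > "10"` and break the ordering.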
Data Modeling and Querying
Azure Table storage data modeling differs from relational database design. Without joins or relationships, you often denormalize data including all necessary information in entities. This redundancy trades storage for query efficiency and simplicity. Consider storing customer and order data: instead of separate normalized tables, you might store customer information with each order entity, or use different entity types in same table distinguished by property. Flexible schema allows different entity structures in the same table: customer entities have different properties than order entities. Common patterns include entity group transactions performing atomic batch operations on entities within same partition; table per entity type when entities are completely different; and single table with type discriminator when querying across types is needed.
Querying Azure Table storage uses REST API, Azure SDK methods, or TableQuery in .NET. Queries filter by partition key and/or row key using comparison operators. Query examples: retrieve specific entity with point query specifying both keys; retrieve all entities in partition specifying only partition key; retrieve range of entities using partition key with row key comparisons like "row key greater than X"; and retrieve entities across partitions using property filters though slower than partition-based queries. Azure Table storage doesn't support joins, aggregations, or complex filtering. Applications must implement these operations client-side if needed. Projection reduces data transfer by selecting specific properties rather than entire entities. Pagination handles large result sets returning results in pages. For complex querying needs, consider Azure Cosmos DB Table API offering same API with added capabilities like secondary indexes, or Azure Cosmos DB SQL API providing rich query language.
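The three query shapes above (point query, partition scan, table scan) can be illustrated with an in-memory stand-in for a table. This is a conceptual sketch of the access patterns, not the azure-data-tables SDK:

```python
# Entities keyed by (PartitionKey, RowKey), as in Table storage.
table = {
    ("cust-1", "order-001"): {"total": 40},
    ("cust-1", "order-002"): {"total": 15},
    ("cust-2", "order-001"): {"total": 99},
}

def point_query(pk, rk):
    """Fastest: a single lookup by the full two-part key."""
    return table.get((pk, rk))

def partition_scan(pk):
    """Slower: enumerate one partition's entities."""
    return [e for (p, _), e in table.items() if p == pk]

def table_scan(predicate):
    """Slowest: examine every entity across all partitions."""
    return [e for e in table.values() if predicate(e)]

assert point_query("cust-2", "order-001") == {"total": 99}
assert len(partition_scan("cust-1")) == 2
assert len(table_scan(lambda e: e["total"] > 20)) == 2
```

The real service behaves analogously: a property filter with no key predicates (the `table_scan` case) must touch every partition, which is why the text recommends designing keys around the dominant query patterns.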
Use Cases and Best Practices
Azure Table storage suits specific scenarios leveraging its strengths. User data storage stores user profiles, preferences, or session data for web applications with simple lookup by user ID. IoT device data ingests high volumes of telemetry from devices with partition key by device ID enabling efficient queries per device. Application logs and telemetry store logs at massive scale with timestamp-based access. Metadata and catalogs store file metadata, product catalogs, or inventory data with key-based lookups. Gaming leaderboards track player scores with efficient rank queries. Queue-like patterns implement simple message queues using table storage with partition key representing queue and row key representing message ID.
Table storage excels when you need massive scale at low cost, have simple access patterns based on keys, can tolerate eventually consistent reads, and have flexible or evolving schemas. It's not suitable for complex relational queries, transactions across partitions, scenarios requiring immediate consistency, or frequent updates to same entities from many clients causing contention. Best practices include choosing partition keys distributing data evenly avoiding hot partitions; using partition keys matching primary query patterns; keeping entities within size limits (1MB per entity); batching operations within partitions for performance; monitoring partition usage identifying hotspots; considering Azure Cosmos DB Table API for global distribution or lower latency requirements; and documenting entity schemas despite flexibility. Azure Table storage provides cost-effective storage for scenarios matching its capabilities, offering excellent price-performance ratio for appropriate workloads.
Comparing Azure Storage Services
Service Comparison Matrix
Azure Storage Services Comparison:
- Data Type: Blob (unstructured binary/text), File (files and directories), Table (structured entities)
- Access Protocol: Blob (REST API, HTTPS), File (SMB, NFS, REST API), Table (REST API)
- Structure: Blob (flat namespace with containers), File (hierarchical directories), Table (key-value entities)
- Primary Use Cases: Blob (media, backups, big data), File (shared files, lift-and-shift), Table (IoT telemetry, logs, metadata)
- Access Pattern: Blob (streaming, sequential/random access), File (file system operations), Table (key-based lookups)
- Cost Optimization: Blob (access tiers: hot/cool/archive), File (performance tiers: Standard/Premium), Table (no tiers, inherently low cost)
- Maximum Size: Blob (190TB per blob), File (100TB per share), Table (no fixed limit, petabyte scale)
- Querying: Blob (list/enumerate operations), File (directory traversal), Table (partition/row key queries)
- Best For: Blob (object storage at scale), File (replacing file servers), Table (massive scale key-value)
Choosing the Right Storage Service
Selecting appropriate Azure storage service depends on data characteristics and access patterns. Choose Azure Blob storage for unstructured data like images, videos, documents, and backups; scenarios requiring content delivery to web browsers or mobile apps; big data analytics requiring massive dataset storage; long-term archival with infrequent access using archive tier; and when REST API access suffices without requiring file system semantics. Blob storage provides best cost optimization through access tiers and supports largest individual object sizes. Choose Azure Files for applications requiring file shares accessible as network drives; lift-and-shift migrations moving file-server-dependent applications; shared configuration files or logs accessed by multiple VMs or containers; scenarios requiring SMB or NFS protocol compatibility; and hybrid scenarios with Azure File Sync providing local caching with cloud backend.
Choose Azure Table storage for structured non-relational data with simple key-based access; scenarios requiring massive scale at lowest cost; IoT telemetry or sensor data with high ingestion rates; application logs or audit trails; and flexible schema requirements where entity structures evolve. Avoid Table storage for complex queries requiring joins or aggregations; scenarios requiring transactions across partitions; or immediate consistency requirements. Many applications use multiple storage services: Blob storage for media files, Files for shared configuration, and Table storage for metadata. Azure Storage accounts can contain blob containers, file shares, and tables, providing unified management and billing. Understanding each service's strengths and limitations enables architecting solutions optimally matching requirements to capabilities, balancing functionality, performance, and cost.
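The selection guidance above condenses into a toy decision helper; the function name and the workload categories are illustrative assumptions, not an Azure API:

```python
def pick_storage_service(data_shape: str, needs_file_protocol: bool = False) -> str:
    """Map a workload description to the likely Azure storage service.

    data_shape: 'unstructured' (media, backups, documents),
                'files' (shared directories mounted as drives), or
                'key-value' (simple structured entities).
    """
    if needs_file_protocol or data_shape == "files":
        return "Azure Files"          # SMB/NFS semantics required
    if data_shape == "key-value":
        return "Azure Table storage"  # massive scale, key-based access
    return "Azure Blob storage"       # default for unstructured objects

assert pick_storage_service("unstructured") == "Azure Blob storage"
assert pick_storage_service("key-value") == "Azure Table storage"
assert pick_storage_service("unstructured", needs_file_protocol=True) == "Azure Files"
```

Real architectures are rarely this clean (the scenarios below combine services), but the first branch captures the exam's key discriminator: if an application needs a mountable file share, only Azure Files fits.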
Real-World Azure Storage Scenarios
Scenario 1: Media Streaming Platform
Business Requirement: Video streaming service stores and delivers video content to millions of users globally with optimized costs for different video access patterns.
Azure Solution: Azure Blob Storage with Access Tiers and CDN
- Architecture: Video files store in Azure Blob storage as block blobs. Recently released popular content uses hot tier for frequent access. Older content with declining views transitions to cool tier after 30 days. Archived shows move to archive tier after 180 days but remain available for on-demand rehydration if users request.
- Content Delivery: Azure CDN caches popular videos at edge locations worldwide reducing latency and Blob storage egress costs. CDN serves cached content while occasionally fetching updated or new videos from Blob storage.
- Lifecycle Management: Automated lifecycle policies transition blobs between access tiers based on age and access patterns without manual intervention. This optimizes costs: hot tier for active content needing low latency, cool tier for less popular content, archive tier for compliance retention of old shows.
- Analytics: Raw video files and associated metadata store in Blob storage enabling Azure Media Services for transcoding, Azure Synapse Analytics for viewing analytics, and machine learning for content recommendations.
- Cost Optimization: Tiered storage dramatically reduces costs. Archive tier costs pennies per GB for rarely accessed content while hot tier provides performance for popular videos. Lifecycle automation eliminates manual tier management.
Outcome: Scalable video platform serving millions of users with optimized costs through intelligent tier usage and CDN caching, while maintaining access to entire content library.
Scenario 2: Enterprise File Share Migration
Business Requirement: Enterprise with offices worldwide needs to modernize on-premises file servers providing centralized management, disaster recovery, and local access performance.
Azure Solution: Azure Files with Azure File Sync
- Architecture: Azure Files hosts centralized file shares in cloud. Windows Servers in each branch office run Azure File Sync agent caching Azure file shares locally. Users access files from local servers at LAN speeds while Azure Files serves as single source of truth.
- Cloud Tiering: File Sync cloud tiering keeps only frequently accessed files cached locally, moving older files to Azure Files as stubs. When users access stubbed files, they transparently retrieve from Azure Files. This optimizes local storage without user impact.
- Disaster Recovery: All files exist in Azure Files with geo-redundant storage replicating to secondary region. If branch office experiences disaster, users access Azure Files directly or Sync agents deploy to new servers recovering quickly.
- Management: IT manages single file share in Azure Files rather than multiple branch file servers. Updates, security policies, and backups centralize. Snapshots provide point-in-time recovery. Soft delete protects against accidental deletions.
- Collaboration: Employees in different offices work on shared files simultaneously. File Sync ensures changes replicate between locations. Azure Files handles conflicts and concurrent access.
Outcome: Modernized file infrastructure eliminating hardware refresh cycles, simplifying management, improving disaster recovery, and maintaining local performance through intelligent caching.
Scenario 3: IoT Sensor Data Platform
Business Requirement: Manufacturing company collects telemetry from thousands of sensors on production equipment requiring high-throughput ingestion, efficient per-device queries, and long-term retention at low cost.
Azure Solution: Azure Table Storage for Telemetry with Blob Storage for Batch Analytics
- Ingestion Architecture: Sensors send telemetry to Azure Event Hubs providing high-throughput ingestion buffer. Stream processing (Azure Stream Analytics or Functions) reads from Event Hubs, formats data, and writes to Azure Table storage for operational queries.
- Table Storage Design: Partition key uses DeviceID grouping all readings from each sensor together enabling efficient per-device queries. Row key uses timestamp (in reverse order for recent-first queries) uniquely identifying readings. This design distributes data evenly across partitions as thousands of devices write simultaneously.
- Operational Queries: Dashboard queries recent readings per device using partition key (DeviceID) with row key range (time range) efficiently retrieving device history without table scans. Alerts query specific devices when anomalies detected.
- Cost Optimization: Table storage's low cost handles billions of telemetry records economically. At a small fraction of what a relational database would cost for similarly structured data at this scale, Table storage optimizes expenses for massive-scale ingestion.
- Batch Analytics: Periodic export from Table storage to Blob storage (Parquet format) enables batch analytics with Azure Synapse Analytics or Databricks analyzing historical patterns, training machine learning models for predictive maintenance, and generating business intelligence reports.
Outcome: Scalable IoT platform ingesting telemetry at very high throughput, providing operational device queries, maintaining long-term history, and enabling batch analytics, all at optimized costs leveraging Table storage's strengths.
Exam Preparation Tips
Key Concepts to Master
- Azure Blob storage: Object storage for unstructured data, block/append/page blobs, hot/cool/archive tiers
- Blob access tiers: Hot (frequent access), cool (30+ days), archive (180+ days, offline)
- Blob use cases: Media storage, backups, big data, static websites, document storage
- Azure Files: Managed file shares, SMB/NFS protocols, mount as network drives
- Azure File Sync: Hybrid solution caching file shares on Windows Servers
- File storage use cases: Lift-and-shift, shared configuration, container persistent volumes
- Azure Table storage: NoSQL key-value store, partition key + row key, massive scale, low cost
- Table storage use cases: IoT telemetry, user profiles, application logs, metadata
- Comparison: Blob for objects, Files for file shares, Table for structured key-value data
Practice Questions
Sample DP-900 Exam Questions:
- Question: Which Azure storage service should be used for storing video files that will be streamed to users?
- A) Azure File storage
- B) Azure Table storage
- C) Azure Blob storage
- D) Azure Queue storage
Answer: C) Azure Blob storage - Blob storage is designed for unstructured data like videos, images, and media files.
- Question: Which access tier in Azure Blob storage has the lowest storage cost but highest access cost and latency?
- A) Hot tier
- B) Cool tier
- C) Archive tier
- D) Premium tier
Answer: C) Archive tier - Archive tier provides lowest storage costs for rarely accessed data with offline access requiring hours of rehydration.
- Question: Which Azure storage service provides managed file shares accessible via SMB protocol?
- A) Azure Blob storage
- B) Azure File storage
- C) Azure Disk storage
- D) Azure Table storage
Answer: B) Azure File storage - Azure Files provides managed file shares accessible via SMB and NFS protocols.
- Question: What is the primary purpose of Azure File Sync?
- A) Encrypting file shares
- B) Compressing files to save storage
- C) Backing up files automatically
- D) Caching Azure file shares on Windows Servers
Answer: D) Caching Azure file shares on Windows Servers - Azure File Sync caches file shares on on-premises or Azure Windows Servers providing local access with cloud storage backend.
- Question: In Azure Table storage, what combination uniquely identifies an entity?
- A) Table name and timestamp
- B) Partition key and row key
- C) Entity ID and table name
- D) Primary key and foreign key
Answer: B) Partition key and row key - The combination of partition key and row key uniquely identifies each entity in Azure Table storage.
- Question: Which Azure storage service is best for storing IoT sensor telemetry requiring millions of writes per second at low cost?
- A) Azure Blob storage
- B) Azure File storage
- C) Azure SQL Database
- D) Azure Table storage
Answer: D) Azure Table storage - Table storage provides massive scale at low cost ideal for high-throughput IoT telemetry with simple key-based access.
- Question: Which blob type is optimized for random read/write operations and used for virtual machine disks?
- A) Block blobs
- B) Append blobs
- C) Page blobs
- D) Archive blobs
Answer: C) Page blobs - Page blobs optimize for random access and serve as disks for Azure Virtual Machines.
- Question: What is a key characteristic that distinguishes Azure File storage from Azure Blob storage?
- A) File storage encrypts data while Blob storage doesn't
- B) File storage provides hierarchical directory structure accessible via SMB/NFS
- C) File storage is more expensive than Blob storage
- D) File storage only works with Windows
Answer: B) File storage provides hierarchical directory structure accessible via SMB/NFS - Azure Files presents traditional file system interface while Blob storage uses flat object namespace with REST API access.
DP-900 Success Tip: Remember Azure Blob storage stores unstructured objects (images, videos, backups) with access tiers (hot for frequent access, cool for 30+ days, archive for 180+ days offline) and blob types (block for files, append for logs, page for VM disks). Azure Files provides managed file shares accessible via SMB/NFS protocols mounting as network drives, ideal for lift-and-shift and shared configuration. Azure Table storage offers NoSQL key-value storage with partition key and row key for massive scale at low cost, perfect for IoT telemetry and application logs. Choose based on data type, access pattern, and protocol requirements.
Hands-On Practice Lab
Lab Objective
Explore Azure storage services by creating and using Blob storage with different access tiers, Azure Files with mounting, and Table storage with entities, understanding capabilities and appropriate use cases for each service.
Lab Activities
Activity 1: Create and Use Azure Blob Storage
- Create storage account: Navigate Azure Portal, create storage account with unique name, select region and redundancy
- Create container: Within storage account, create blob container with private access level
- Upload files: Upload sample files (images, documents) to container using Portal
- Set access tier: Change blob access tier from hot to cool, note pricing differences
- Generate SAS token: Create shared access signature providing time-limited access to blob
- Access blob: Use SAS URL to access blob from browser or REST API tool
- Explore features: Review lifecycle management policies, versioning, soft delete options
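The SAS token generated in this activity is appended to the blob URL as query parameters. A minimal sketch of that anatomy, using Python's standard URL parsing (the account, container, blob name, and all parameter values below are made-up placeholders; `sp`, `st`, `se`, `spr`, `sv`, and `sig` are the standard SAS query parameters for permissions, start/expiry times, protocol, service version, and signature):

```python
from urllib.parse import urlparse, parse_qs

# A made-up SAS URL: blob URL plus signed query parameters.
sas_url = ("https://mystorageacct.blob.core.windows.net/docs/report.pdf"
           "?sp=r&st=2025-01-01T00:00:00Z&se=2025-01-02T00:00:00Z"
           "&spr=https&sv=2022-11-02&sig=FAKESIGNATURE")

parts = urlparse(sas_url)          # splits scheme/host/path from the query
params = parse_qs(parts.query)     # query string -> dict of parameter lists

# sp=r grants read-only access; se is the expiry after which the URL fails.
print(parts.path, params["sp"], params["se"])
```

Anyone holding this URL gets exactly the permissions in `sp` until the `se` expiry, which is why SAS tokens should be scoped narrowly and short-lived.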
Activity 2: Compare Blob Types and Tiers
- Block blobs: Upload document or image as block blob, noting that block is the default blob type
- Append blobs: Create append blob, simulate log scenario appending multiple entries
- Access tiers: Create blobs in hot tier, move to cool tier, document cost differences from pricing calculator
- Archive tier: Move blob to archive tier, attempt access (note it's offline), initiate rehydration
- Lifecycle policies: Create policy moving blobs to cool after 30 days without access, archive after 90 days
- Document use cases: For each blob type and tier, document appropriate scenarios
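The lifecycle policy in this activity can be sketched as a policy document in Azure's lifecycle management schema, expressed here as a Python dict you would submit via the Portal, CLI, or management SDK. The rule name is made up, and note that `daysAfterLastAccessTimeGreaterThan` requires last-access-time tracking to be enabled on the storage account.

```python
import json

# Sketch of the policy from this activity: move block blobs to cool
# after 30 days without access, then to archive after 90 days.
policy = {
    "rules": [
        {
            "enabled": True,
            "name": "tier-down-aging-blobs",  # hypothetical rule name
            "type": "Lifecycle",
            "definition": {
                "filters": {"blobTypes": ["blockBlob"]},
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterLastAccessTimeGreaterThan": 30},
                        "tierToArchive": {"daysAfterLastAccessTimeGreaterThan": 90},
                    }
                },
            },
        }
    ]
}

print(json.dumps(policy, indent=2))
```

Policies like this run automatically, so tiering happens without manual intervention once the rule is attached to the storage account.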
Activity 3: Create and Mount Azure File Share
- Create file share: In storage account, create file share with appropriate quota
- Upload files: Upload files and create directory structure in file share
- Get mount instructions: Portal provides connection instructions for Windows, Linux, macOS
- Mount share (if possible): On local machine or Azure VM, mount file share as network drive
- Test file operations: Create, modify, delete files through mounted drive using standard file operations
- Create snapshot: Take snapshot of file share, modify files, restore from snapshot
- Compare with Blob: Note the key difference: Files use a file system interface, while Blobs use object storage
Activity 4: Work with Azure Table Storage
- Create table: In storage account, create table for storing sample data
- Design entities: Plan partition key and row key structure for sample scenario (users, devices, logs)
- Insert entities: Use Azure Storage Explorer or SDK to insert sample entities with various properties
- Query by keys: Perform point query with partition and row key, partition scan with only partition key
- Flexible schema: Insert entities with different properties demonstrating schemaless design
- Observe performance: Compare speeds of point query (fastest), partition scan, and table scan (slowest)
- Document patterns: Note how partition key choice affects scalability and query efficiency
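The three query patterns in this activity can be modeled with a toy in-memory table (plain dicts, not the Azure SDK) keyed by `(PartitionKey, RowKey)`, mirroring Table storage's two-part primary key. The sensor names and values are made up for illustration.

```python
# Toy model of a table: entities keyed by (PartitionKey, RowKey).
table = {
    ("sensor-1", "0001"): {"temp": 21.5},
    ("sensor-1", "0002"): {"temp": 22.0},
    ("sensor-2", "0001"): {"temp": 19.8},
}

# Point query: direct lookup by both keys (fastest in Table storage).
point = table[("sensor-1", "0002")]

# Partition scan: filter on partition key only, touching one partition.
partition = {k: v for k, v in table.items() if k[0] == "sensor-1"}

# Table scan: a predicate involving no keys must examine every entity
# across every partition (slowest).
warm = {k: v for k, v in table.items() if v["temp"] > 20}

print(point["temp"], len(partition), len(warm))  # 22.0 2 2
```

The same ranking holds at scale: point queries hit one entity, partition scans stay within one partition, and table scans grow with the total size of the table.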
Activity 5: Compare Storage Services
- Create comparison table: Document Blob, File, Table characteristics side-by-side
- Access methods: Blob (REST API), File (SMB/NFS mount), Table (REST API with key queries)
- Structure: Blob (flat containers), File (hierarchical directories), Table (key-value entities)
- Cost comparison: Use pricing calculator comparing costs for equivalent data amounts
- Use case matching: For sample scenarios, identify most appropriate storage service with justification
- Integration: Note how services work together: Blob for media, Files for config, Table for metadata
Activity 6: Explore Storage Security and Management
- Access control: Configure container access levels (private, blob, container) in Blob storage
- Shared access signatures: Generate SAS tokens with different permissions and expiration times
- Encryption: Review encryption at rest (always enabled), configure encryption in transit requirements
- Monitoring: Enable diagnostic logging, view metrics in Azure Monitor
- Cost management: Review storage account metrics, estimate costs with pricing calculator
- Redundancy: Compare redundancy options (LRS, GRS, ZRS) and their use cases
Lab Outcomes
After completing this lab, you'll understand Azure Blob storage for unstructured object storage with access tiers and blob types, Azure Files for managed file shares with SMB/NFS access and mounting capabilities, and Azure Table storage for NoSQL key-value data with partition/row keys. You'll recognize appropriate use cases for each service and understand how they differ in access methods, structure, and cost. This hands-on experience demonstrates the Azure storage capabilities tested in the DP-900 exam and provides a practical foundation for working with Azure storage services.
Frequently Asked Questions
What is Azure Blob storage and what are its primary use cases?
Azure Blob storage is a massively scalable object storage service for unstructured data including text, binary data, images, videos, documents, and backups. Blob stands for Binary Large Object. Azure Blob storage stores data as objects in containers accessible via HTTP/HTTPS using REST APIs, Azure SDKs, PowerShell, Azure CLI, or Azure Portal. Primary use cases include serving images and documents to browsers for websites, streaming video and audio content, storing files for distributed access, writing log files and diagnostic data, storing data for backup and disaster recovery, archiving data for compliance and long-term retention, storing data for analysis by Azure or on-premises services, and storing virtual machine disk images as page blobs for Azure Virtual Machines. Blob storage supports three blob types: block blobs for text and binary data up to 190TB ideal for documents and media files; append blobs optimized for append operations like logging; and page blobs for random access files up to 8TB used for virtual machine disks. Access tiers optimize costs: hot tier for frequently accessed data, cool tier for infrequently accessed data stored at least 30 days with lower storage costs but higher access costs, and archive tier for rarely accessed data stored at least 180 days with lowest storage costs but hours of rehydration time and higher retrieval costs. Blob storage provides durability through redundancy options, security through encryption and access controls, and scalability handling petabytes of data.
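The HTTP/HTTPS addressing mentioned above follows a predictable pattern: the storage account name plus the public blob endpoint suffix, then the container and blob names. A minimal sketch (the account, container, and blob names are made-up examples; sovereign clouds use different endpoint suffixes):

```python
# Blob URLs in the Azure public cloud follow this shape:
#   https://<account>.blob.core.windows.net/<container>/<blob>
account = "mystorageacct"
container = "media"
blob_name = "videos/intro.mp4"  # '/' in the name acts as a virtual directory

url = f"https://{account}.blob.core.windows.net/{container}/{blob_name}"
print(url)
```

Note that the "directory" here is purely a naming convention: Blob storage's namespace is flat, and `videos/` exists only as a prefix within the blob name.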
What are the different blob types in Azure Blob storage?
Azure Blob storage supports three blob types optimized for different scenarios. Block blobs store text and binary data composed of blocks uploaded individually and assembled into blobs, supporting up to 190TB per blob. Block blobs are ideal for storing documents, images, videos, and general-purpose files. They're the default blob type and most commonly used. Append blobs are optimized for append operations where data is added to the end of the blob without modifying existing data, ideal for logging scenarios where applications continuously append data. Append blobs support up to 195GB. They don't support random writes, only appends to the end, making them efficient for scenarios like application logging, audit logging, or streaming data capture. Page blobs store random access files up to 8TB, optimized for frequent read/write operations. They're divided into 512-byte pages enabling random access and efficient updates to specific byte ranges. Page blobs serve as disks for Azure Virtual Machines (both OS and data disks). They're less commonly used directly by applications but critical for IaaS workloads. Choosing the appropriate blob type depends on access patterns: block blobs for general file storage and media, append blobs for logging and streaming, page blobs for virtual machine disks requiring random access. Block blobs support lifecycle management policies transitioning data between access tiers based on age or last access time, optimizing storage costs automatically.
What are Azure Blob storage access tiers and when should each be used?
Azure Blob storage provides access tiers optimizing costs based on data access frequency. Hot tier is designed for data accessed frequently or requiring low latency, with highest storage costs but lowest access costs. Use hot tier for active data, website content, images served to users, data undergoing active processing, and short-term backup and disaster recovery. Cool tier is for infrequently accessed data stored at least 30 days, with lower storage costs than hot tier but higher access and transaction costs. Early deletion fees apply if data is deleted or moved before 30 days. Use cool tier for short-term backup and disaster recovery datasets, older media content not viewed frequently but expected to be available immediately when accessed, and large datasets stored while collecting more data for processing. Archive tier provides lowest storage costs for rarely accessed data stored at least 180 days, with highest access costs and retrieval latency. Data in archive tier is offline requiring rehydration (copying to hot or cool tier) before access, taking hours. Early deletion fees apply before 180 days. Use archive tier for long-term backup, compliance and archival data, original raw data preserved after processing, and data accessed rarely but must be retained for compliance. Lifecycle management policies automatically transition blobs between tiers based on rules like moving to cool tier after 30 days without access, or archive after 90 days, optimizing costs without manual intervention. Access tier choice balances storage costs against access frequency and latency requirements.
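The 30-day and 180-day minimums above suggest a simple decision rule for choosing a tier. This is an illustrative heuristic, not an Azure API; real tiering decisions should also weigh access/retrieval costs and early deletion fees.

```python
def suggest_tier(days_since_last_access: int, needs_immediate_access: bool) -> str:
    """Toy tier chooser based on the 30/180-day minimums described above."""
    # Archive is offline, so data needing immediate access can never go there;
    # frequently accessed data belongs in hot regardless of age.
    if needs_immediate_access or days_since_last_access < 30:
        return "hot"
    if days_since_last_access < 180:
        return "cool"
    return "archive"

print(suggest_tier(5, False))    # hot
print(suggest_tier(60, False))   # cool
print(suggest_tier(365, False))  # archive
```

In practice this logic is usually encoded as a lifecycle management policy rather than application code, so blobs transition between tiers automatically.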
What is Azure File storage and how does it differ from Blob storage?
Azure File storage provides fully managed file shares in the cloud accessible via Server Message Block (SMB) or Network File System (NFS) protocols. Unlike Blob storage designed for object storage with REST API access, Azure Files presents traditional file system interface that applications and users access like network drives. File shares can be mounted concurrently by multiple cloud or on-premises deployments of Windows, Linux, and macOS. Azure Files enables lift-and-shift of applications requiring file shares to cloud without code changes, since applications access Azure Files using standard file system APIs. Key differences from Blob storage: Azure Files uses hierarchical directory structure like traditional file systems while Blob storage has flat namespace with virtual directories; Azure Files mounts as network drives using SMB/NFS while Blob storage accesses via REST APIs; Azure Files serves use cases requiring file system semantics like shared application settings, diagnostic logs, development tools, and lift-and-shift scenarios, while Blob storage serves object storage needs like media files, backups, and big data. Azure Files supports both SMB 3.0 (Windows, Linux, macOS) and NFS 4.1 (Linux, macOS) protocols. Features include snapshots for point-in-time restore, soft delete for accidental deletion protection, and Azure File Sync enabling caching of file shares on Windows Servers on-premises for distributed access. Use Azure Files when applications need file system interface, for replacing or supplementing on-premises file servers, or for sharing application data across multiple VMs. Use Blob storage for large-scale object storage, streaming, backups, and scenarios where REST API access suffices.
What are common use cases for Azure File storage?
Azure File storage serves several common scenarios leveraging managed file share capabilities. Lift-and-shift migrations enable moving applications requiring shared file storage to cloud without modifications. Many on-premises applications use network file shares; Azure Files provides a compatible replacement. Configuration files shared across application instances store application settings, configuration, and shared data accessible by multiple VMs or container instances. Diagnostic logs centralize logging where multiple application instances write logs to shared file share for centralized processing and analysis. Development tools and utilities shared across development teams store shared development tools, libraries, or utilities accessible by development VMs or containers. Home directories and profile storage provide network home directories for users or roaming profiles. Hybrid scenarios with Azure File Sync enable caching Azure file shares on on-premises Windows Servers, providing local access with cloud storage backend. This suits scenarios like distributed branch offices with local caching but centralized storage. Container persistent volumes in Azure Kubernetes Service (AKS) or Azure Container Instances use Azure Files for persistent storage mounted to containers. Media workloads store media files accessible by rendering or processing applications. Database backups store SQL Server backups to Azure Files using native backup to URL. Compared to Azure Blob storage, Azure Files suits scenarios needing file system semantics, SMB/NFS protocol access, mounting as network drives, and lift-and-shift compatibility. The managed nature eliminates file server maintenance: no patching, backup infrastructure, or high availability configuration required. Azure Files handles redundancy, encryption, and scaling automatically.
What is Azure Table storage and when should it be used?
Azure Table storage is a NoSQL key-value store for semi-structured data providing massive scale at low cost. It stores structured non-relational data in schemaless design, meaning each entity (row) in table can have different properties (columns). This flexibility suits evolving data models. Tables contain entities identified by partition key and row key combination. Partition key groups related entities enabling efficient querying and scalability, while row key uniquely identifies entities within partition. The combination provides unique identifier for each entity. Azure Table storage suits scenarios requiring massive scale (terabytes of data), flexible schemas where entity structures evolve, low-cost storage for large volumes of structured data, simple key-value lookups without complex queries or joins, and applications tolerating eventual consistency. Common use cases include storing user data for web applications like user profiles or preferences, storing device data for IoT applications with high ingestion rates, storing application logs or telemetry at scale, and storing metadata or catalog information. Azure Table storage doesn't support complex queries, joins, or relationships like relational databases. Queries primarily filter by partition and row keys. It provides fast access when querying by keys but slower performance for scans or queries across partitions. Azure Table storage uses eventually consistent replication improving availability but meaning reads might not immediately reflect recent writes. Cost-effectiveness makes Azure Table storage attractive for scenarios where relational database capabilities aren't required. However, for complex querying or relational data, Azure SQL Database or Cosmos DB Table API (offering global distribution and lower latency) might be more appropriate despite higher costs.
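The schemaless design described above means entities in the same table can carry different properties; only `PartitionKey` and `RowKey` are required. Modeled here as plain dicts (a sketch of the data shape, not the Azure SDK; all values are made up):

```python
# Two entities that could live in the same table despite sharing no
# properties beyond the mandatory two-part key.
profile = {"PartitionKey": "users", "RowKey": "alice",
           "Email": "alice@example.com", "Theme": "dark"}
reading = {"PartitionKey": "device-7", "RowKey": "000123",
           "Temperature": 21.4, "Humidity": 0.55}

# The only properties the two entities have in common are the keys.
shared = set(profile) & set(reading)
print(sorted(shared))  # ['PartitionKey', 'RowKey']
```

This flexibility is what lets the data model evolve without schema migrations, at the cost of pushing any structural validation into the application.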
How do partition keys and row keys work in Azure Table storage?
Azure Table storage uses partition keys and row keys as a two-part primary key identifying entities uniquely and enabling scalability. Partition key groups related entities into partitions (physical storage units). All entities with same partition key store together enabling efficient querying and atomic batch operations across entities in same partition. Partition key choice critically impacts performance and scalability. Good partition keys distribute data evenly across many partitions enabling parallel processing and preventing hot partitions receiving disproportionate traffic. For example, storing customer orders might use CustomerID as partition key grouping orders per customer. Row key uniquely identifies entities within partition. Combined, partition key and row key form unique identifier for each entity in table. Together they enable efficient lookups: querying by partition key and row key is the fastest operation in Table storage, using a point query. Querying only by partition key returns all entities in that partition (partition scan). Querying without keys requires a table scan across all partitions and is the slowest. Design considerations include choosing partition keys distributing data evenly, using partition keys matching query patterns for efficient queries, considering partition key allowing related entities to group together for batch operations, and ensuring unique row keys within each partition. For example, IoT telemetry might use DeviceID as partition key (grouping sensor readings per device) and timestamp as row key (uniquely identifying readings). Bad partition key choices include single partition key for all entities (creating scalability bottleneck) or too fine-grained keys creating excessive small partitions. Understanding partition and row key design is crucial for Azure Table storage performance and scalability.
What are the main differences between Azure Blob, File, and Table storage?
Azure Blob, File, and Table storage serve different purposes with distinct characteristics. Azure Blob storage is object storage for unstructured data like images, videos, backups, and logs accessed via REST API. It uses flat namespace with containers holding blobs, provides access tiers (hot, cool, archive) for cost optimization, supports massive scale up to petabytes, and suits scenarios like media storage, backups, big data analytics, and serving content to browsers. Azure File storage provides managed file shares accessible via SMB/NFS protocols presenting hierarchical file system that mounts as network drives. It suits lift-and-shift migrations requiring file shares, shared configuration files, development tools, logs, and hybrid scenarios with Azure File Sync. It provides file system semantics with directories and files familiar to traditional applications. Azure Table storage is NoSQL key-value store for structured non-relational data with flexible schema. It uses partition key and row key for entity identification, provides massive scale at lowest cost among the three, suits scenarios like user profiles, IoT device data, application logs, and metadata storage where simple key lookups suffice without complex queries. Key differentiators: Blob storage optimizes for large objects and streaming; File storage provides file system compatibility for applications expecting network drives; Table storage optimizes for structured data at massive scale with simple access patterns. Access methods differ: Blob uses REST APIs; File uses SMB/NFS protocols; Table uses REST APIs with key-based queries. Cost generally increases from Table (cheapest) to Blob to File for equivalent storage. Choose based on data type, access patterns, required protocols, and whether relational capabilities, file system semantics, or massive scale at low cost matters most for your scenario.
Written by Joe De Coppi - Last Updated November 14, 2025