πŸš€ Exciting Day 6 of My AWS DevOps Engineer Professional Journey! πŸš€

Greetings, tech enthusiasts! Today unfolds another chapter in my AWS DevOps certification journey, and I'm thrilled to share the wealth of knowledge gained on Day 6 through StΓ©phane Maarek's Udemy course.

πŸ’‘ Course Progress - Day 6: Unveiling Amazon Kinesis, Route 53, Amazon RDS, Aurora, ElastiCache, DynamoDB, AWS DMS, S3, and Storage Gateway!

As we continue our exploration of AWS services, let's delve into the diverse topics covered and gain insights for our cloud endeavors.


πŸ” Key Learnings


🌊 Amazon Kinesis

Amazon Kinesis is a fully managed service that ingests, buffers, and processes streaming data in real time. With Kinesis, you can ingest real-time data, such as video, audio, application logs, website clickstreams, and IoT telemetry data, for machine learning (ML), analytics, and other applications. Kinesis Data Streams, Kinesis Data Firehose, and Kinesis Data Analytics are the three services that make up the Kinesis streaming data platform.

🌊 Kinesis Data Streams

Amazon Kinesis Data Streams is a serverless streaming data service that simplifies the capture, processing, and storage of data streams at any scale. Kinesis Data Streams enables you to collect and process large streams of data records in real time. You can create data-processing applications, known as Kinesis Data Streams applications, that use the Kinesis Client Library and run on Amazon EC2 instances.
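
As a minimal illustration of the producer and consumer sides, the boto3 sketch below writes one record to a stream and reads it back; the stream name `clickstream-demo` is a hypothetical example, and the consumer side is simplified to a single shard.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Write one record; the partition key determines which shard receives it.
kinesis.put_record(
    StreamName="clickstream-demo",  # hypothetical stream name
    Data=json.dumps({"page": "/home", "user": "u-123"}).encode(),
    PartitionKey="u-123",
)

# Read from the first shard (simplified: real consumers iterate over all shards).
shard_id = kinesis.describe_stream(StreamName="clickstream-demo")["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName="clickstream-demo",
    ShardId=shard_id,
    ShardIteratorType="TRIM_HORIZON",
)["ShardIterator"]
records = kinesis.get_records(ShardIterator=iterator, Limit=10)["Records"]
print(f"Fetched {len(records)} record(s)")
```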

🌊 Kinesis Data Stream Consumer Scaling

To scale your Kinesis Data Stream consumers, you can use the enhanced fan-out feature. With enhanced fan-out, each registered consumer gets its own 2 MB/sec allotment of read throughput per shard, so multiple consumers can read the same stream in parallel without contending with each other for throughput.
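
To sketch how a consumer opts into enhanced fan-out, the boto3 call below registers a dedicated consumer against a stream; the stream ARN and consumer name are placeholders.

```python
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Register a dedicated (enhanced fan-out) consumer for the stream.
response = kinesis.register_stream_consumer(
    StreamARN="arn:aws:kinesis:us-east-1:123456789012:stream/clickstream-demo",  # placeholder ARN
    ConsumerName="analytics-consumer",  # placeholder name
)
consumer_arn = response["Consumer"]["ConsumerARN"]

# The consumer ARN is then passed to SubscribeToShard so records are pushed
# to the consumer over HTTP/2 instead of being polled with GetRecords.
print("Registered enhanced fan-out consumer:", consumer_arn)
```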

πŸ”₯ Kinesis Data Firehose

Amazon Kinesis Data Firehose is an extract, transform, and load (ETL) service that reliably captures, transforms, and delivers streaming data to data lakes, data stores, and analytics services. Kinesis Data Firehose is a fully managed service that makes it easy to capture, transform, and load massive volumes of streaming data from hundreds of thousands of sources into Amazon S3, Amazon Redshift, Amazon OpenSearch Service, Kinesis Data Analytics, generic HTTP endpoints, and service providers like Datadog, New Relic, MongoDB, and Splunk.
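
As a small sketch of the producer side, the boto3 call below sends one record to a Firehose delivery stream (the delivery stream name is hypothetical); Firehose then buffers the data and delivers it to the configured destination, such as S3.

```python
import json
import boto3

firehose = boto3.client("firehose", region_name="us-east-1")

# Send a single record; Firehose buffers by size/time before delivering to the destination.
firehose.put_record(
    DeliveryStreamName="app-logs-to-s3",  # hypothetical delivery stream
    Record={"Data": (json.dumps({"level": "INFO", "msg": "user signed in"}) + "\n").encode()},
)
```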

πŸ“Š Kinesis Data Analytics

Amazon Kinesis Data Analytics is a fully managed service that enables you to process and analyze streaming data using standard SQL. With Kinesis Data Analytics, you can construct applications that transform and provide insights into your data. You can use Kinesis Data Analytics to generate time-series analytics, feed real-time dashboards, and create real-time metrics.

πŸ€– Kinesis Data Analytics Using Machine Learning

You can use Amazon Kinesis Data Analytics for SQL Applications to run machine learning (ML) on your streaming data without training models yourself: it provides built-in SQL functions such as RANDOM_CUT_FOREST for detecting anomalies in streaming data and HOTSPOTS for finding dense regions in your data. It also supports user-defined functions (UDFs) for custom transformations.


🌐 Route 53

Amazon Route 53 is a highly available and scalable cloud Domain Name System (DNS) web service that translates domain names to IP addresses. Route 53 can be used to route users to Internet applications by translating human-readable names like www.example.com into the numeric IP addresses like 192.0.2.1 that computers use to connect to each other.

🌐 Routing Policies

  1. Weighted Routing Policy: Routes traffic to multiple resources in proportions that you specify (see the boto3 sketch after this list).

  2. Latency Routing Policy: Routes traffic to the resource that provides the lowest network latency for your end user.

  3. Failover Routing Policy: Routes traffic to a primary resource while it is healthy, and to a secondary resource when the primary becomes unhealthy. The primary and secondary records can route traffic to anything from an Amazon S3 bucket that is configured as a website to a complex tree of records, and failover routing can also be used for records in a private hosted zone.
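
As an example of the weighted policy mentioned above, the boto3 sketch below creates two weighted A records for the same name, sending roughly 80% of queries to one endpoint and 20% to the other; the hosted zone ID, domain, and IP addresses are placeholders.

```python
import boto3

route53 = boto3.client("route53")

def weighted_record(identifier, ip, weight):
    # Each weighted record needs a unique SetIdentifier and a relative Weight.
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "app.example.com",
            "Type": "A",
            "SetIdentifier": identifier,
            "Weight": weight,
            "TTL": 60,
            "ResourceRecords": [{"Value": ip}],
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000000000",  # placeholder hosted zone ID
    ChangeBatch={
        "Comment": "Split traffic 80/20 between two endpoints",
        "Changes": [
            weighted_record("primary", "192.0.2.10", 80),
            weighted_record("canary", "192.0.2.20", 20),
        ],
    },
)
```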


πŸ’½ Amazon RDS

Amazon Relational Database Service (Amazon RDS) is a fully managed relational database service that makes it easy to set up, operate, and scale a relational database in the cloud. Amazon RDS provides cost-efficient and resizable capacity while automating time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.

πŸ”„ Read Replicas in RDS

Amazon RDS Read Replicas let you scale reads in an RDS database by creating read-only copies of the primary database instance. RDS supports up to 15 read replicas per database instance, and the replicas can be located in the same Availability Zone, in different Availability Zones, or even in other AWS Regions. Each read replica can also be promoted to a standalone database instance. The main limitation of read replicas is that they serve only read (SELECT) queries, so all writes must still go to the primary instance. Data written to the primary instance is replicated to its read replicas asynchronously.
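
As a small sketch, the boto3 call below creates a read replica from an existing RDS instance; the instance identifiers, instance class, and Availability Zone shown are placeholders.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create a read-only replica of an existing primary instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="orders-db-replica-1",   # placeholder replica name
    SourceDBInstanceIdentifier="orders-db",       # placeholder primary instance
    DBInstanceClass="db.t3.medium",
    AvailabilityZone="us-east-1b",                # place the replica in a different AZ
    PubliclyAccessible=False,
)

# Later, a replica can be promoted to a standalone, writable instance:
# rds.promote_read_replica(DBInstanceIdentifier="orders-db-replica-1")
```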

πŸ”’ RDS Multi-AZ

Amazon RDS Multi-AZ deployments provide enhanced availability and durability for database instances within a single region. With Multi-AZ, Amazon RDS automatically provisions and maintains a synchronous standby replica in a different Availability Zone. In the event of a planned or unplanned outage of your primary instance, Amazon RDS automatically switches to the standby replica.
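
For completeness, converting an existing instance to Multi-AZ is a single modification call; a minimal boto3 sketch, assuming a hypothetical instance named `orders-db`:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Enable Multi-AZ: RDS provisions a synchronous standby in another Availability Zone.
rds.modify_db_instance(
    DBInstanceIdentifier="orders-db",  # placeholder instance name
    MultiAZ=True,
    ApplyImmediately=True,             # otherwise the change waits for the maintenance window
)
```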

πŸ” Key Difference between Real Replicas and RDS Multi-AZ

The key difference between Read Replicas and Multi-AZ is that Read Replicas are used to scale reads, while Multi-AZ is used to provide enhanced availability and durability for database instances.


πŸ’Ž Amazon Aurora

Amazon Aurora is a MySQL and PostgreSQL-compatible relational database built for the cloud. Aurora combines the performance and availability of traditional enterprise databases with the simplicity and cost-effectiveness of open-source databases. Aurora is fully managed by Amazon Relational Database Service (RDS), which automates time-consuming administration tasks such as hardware provisioning, database setup, patching, and backups.


πŸš€ Amazon ElastiCache

Amazon ElastiCache is a fully managed in-memory data store and cache service that makes it easy to deploy, operate, and scale popular open source compatible in-memory data stores in the cloud. ElastiCache supports two open-source in-memory caching engines: Memcached and Redis.

πŸ”„ ElastiCache Cluster Mode

ElastiCache Cluster Mode is a feature that allows you to partition your data across multiple Redis shards. With Cluster Mode enabled, you can scale your Redis workloads beyond the memory and I/O limits of a single node.

When Cluster Mode is disabled, ElastiCache runs Redis with a single shard (one primary node, optionally with read replicas), so your dataset must fit within that node's memory and I/O limits; with Cluster Mode enabled, the data is partitioned across multiple shards and can grow beyond a single node.
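
A minimal boto3 sketch of creating a cluster-mode-enabled Redis replication group (the group name and node type are placeholders): `NumNodeGroups` sets the number of shards and `ReplicasPerNodeGroup` the replicas per shard.

```python
import boto3

elasticache = boto3.client("elasticache", region_name="us-east-1")

# Cluster mode enabled: data is partitioned across NumNodeGroups shards.
elasticache.create_replication_group(
    ReplicationGroupId="sessions-redis",  # placeholder name
    ReplicationGroupDescription="Cluster-mode-enabled Redis for session data",
    Engine="redis",
    CacheNodeType="cache.t3.micro",
    NumNodeGroups=3,                 # 3 shards (partitions)
    ReplicasPerNodeGroup=1,          # 1 replica per shard for availability
    AutomaticFailoverEnabled=True,
)
```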


🌐 Amazon DynamoDB

Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. DynamoDB is designed to scale horizontally across multiple servers and geographic regions.

Advanced features of Amazon DynamoDB include:

  • DynamoDB Streams: A feature that captures a time-ordered sequence of item-level modifications in any DynamoDB table (see the sketch after this list).

  • DynamoDB Global Tables: A feature that enables you to replicate DynamoDB tables across AWS Regions.

  • DynamoDB Accelerator (DAX): A fully managed, highly available, in-memory cache for DynamoDB that delivers up to a 10x performance improvement.
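
As a small illustration of the first feature, the boto3 sketch below creates an on-demand table with DynamoDB Streams enabled (the table and attribute names are hypothetical); every write to the table then appears as an ordered stream record.

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Create an on-demand table with a stream of item-level changes enabled.
dynamodb.create_table(
    TableName="orders",  # hypothetical table
    AttributeDefinitions=[{"AttributeName": "order_id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "order_id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # capture item state before and after each change
    },
)
```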


πŸ” AWS DMS

AWS Database Migration Service (AWS DMS) is a fully managed service that makes it easy to migrate relational databases, data warehouses, and NoSQL databases to AWS. You can use AWS DMS to migrate your data to and from the most widely used commercial and open-source databases.

πŸ“Š Monitoring AWS DMS

You can monitor AWS DMS using Amazon CloudWatch. CloudWatch provides metrics and alarms for monitoring the replication status, latency, and throughput of your AWS DMS tasks.
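
As a sketch of pulling one of these metrics, the boto3 snippet below reads replication latency for a task from CloudWatch; the metric name, dimensions, and identifiers shown are assumptions based on the AWS/DMS namespace and should be checked against your own tasks.

```python
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average target-side CDC latency for a replication task over the last hour.
# Metric name and dimensions are assumptions; verify them in the CloudWatch console.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/DMS",
    MetricName="CDCLatencyTarget",
    Dimensions=[
        {"Name": "ReplicationInstanceIdentifier", "Value": "dms-instance-1"},  # placeholder
        {"Name": "ReplicationTaskIdentifier", "Value": "orders-migration"},    # placeholder
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```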


🌐 Amazon S3

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. S3 provides developers and IT teams with secure, durable, and highly scalable object storage. With S3, you can store and retrieve any amount of data from anywhere on the web.

πŸ”„ S3 Replication and How to Do It

Amazon S3 Replication is a feature that automatically copies objects between buckets, either across AWS Regions (Cross-Region Replication) or within the same Region (Same-Region Replication), and it works between buckets in the same account or in different accounts. You can set up S3 Replication with the Amazon S3 console, the AWS CLI, or the SDKs. Using the AWS Management Console, you can follow these steps (a boto3 sketch of the replication configuration in step 5 appears after the list):

  1. Create an Amazon S3 bucket in the source region.

  2. Create an Amazon S3 bucket in the destination region.

  3. Enable versioning on both the source and destination buckets.

  4. Create an IAM role that Amazon S3 can assume to replicate objects on your behalf.

  5. Add a replication configuration to your source bucket that specifies the destination bucket and the IAM role.

  6. Configure the replication rule to replicate all objects or a subset of objects in the source bucket.

  7. Monitor the replication progress using Amazon S3 metrics and CloudWatch alarms.
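
Step 5 above can also be done programmatically; a minimal boto3 sketch, assuming versioning is already enabled on both buckets and using placeholder bucket names and a placeholder IAM role ARN:

```python
import boto3

s3 = boto3.client("s3")

# Attach a replication configuration to the (versioned) source bucket.
s3.put_bucket_replication(
    Bucket="my-source-bucket",  # placeholder source bucket
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/s3-replication-role",  # placeholder IAM role
        "Rules": [
            {
                "ID": "replicate-everything",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {},  # empty filter = replicate all new objects
                "Destination": {"Bucket": "arn:aws:s3:::my-destination-bucket"},
                "DeleteMarkerReplication": {"Status": "Disabled"},
            }
        ],
    },
)
```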


πŸ’Ύ AWS Storage Gateway

AWS Storage Gateway is a hybrid cloud storage service that enables your on-premises applications to seamlessly use AWS cloud storage. Storage Gateway provides three types of gateways: File Gateway, Volume Gateway, and Tape Gateway. Each gateway type provides a different way to connect your on-premises applications to AWS cloud storage.

πŸ”„ File Gateway Cache Refresh

File Gateway Cache Refresh is a feature that refreshes the gateway's cached inventory of the objects stored in your Amazon S3 bucket. As your NFS or SMB client performs file system operations, your gateway maintains an inventory of the objects in the S3 bucket associated with your file share, and it uses this cached inventory to reduce the latency and frequency of Amazon S3 requests. If objects are added to or removed from the bucket directly, you can refresh the object cache for your file share using the Storage Gateway console or the RefreshCache API.
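
The API route is a single call; a minimal boto3 sketch, assuming a hypothetical file share ARN:

```python
import boto3

storagegateway = boto3.client("storagegateway", region_name="us-east-1")

# Ask the gateway to re-list the S3 bucket so the file share sees objects
# that were added or removed directly in S3.
storagegateway.refresh_cache(
    FileShareARN="arn:aws:storagegateway:us-east-1:123456789012:share/share-XXXXXXXX",  # placeholder ARN
    FolderList=["/"],  # refresh from the root of the share
    Recursive=True,
)
```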


✨ The Journey Continues: As Day 6 wraps up, I'm excited about the depth of understanding gained and the practical skills acquired. Stay tuned for more updates as my AWS DevOps journey continues to unfold!


If you have any doubts, suggestions, or questions, let's connect on LinkedIn or Twitter (X).
