As a cloud architect, you need to understand the benefits of cloud computing.
Business Benefits of Cloud Computing
Almost zero upfront infrastructure investment
Just-in-time Infrastructure
More efficient resource utilization
Usage-based costing
Reduced time to market
Technical Benefits of Cloud Computing
Scriptable infrastructure
Auto-scaling
Proactive Scaling
More Efficient Development lifecycle
Improved Testability
Disaster Recovery and Business Continuity
Cloud Best Practices
Design for failure
- Fail over gracefully using Elastic IPs: an Elastic IP is a static IP address that can be dynamically remapped. You can quickly remap it to fail over to another set of servers so that your traffic is routed to the new servers. This works well when upgrading from old to new versions of your software or in the event of hardware failure.
- Utilize multiple Availability Zones: Availability Zones are conceptually like logical datacenters. By deploying your architecture to multiple Availability Zones, you can ensure high availability. Utilize Amazon RDS Multi-AZ deployment functionality to automatically replicate database updates across multiple Availability Zones.
- Maintain an Amazon Machine Image so that you can quickly restore and clone environments in a different Availability Zone.
- Maintain multiple database slaves across Availability Zones and set up hot replication.
- Utilize Amazon CloudWatch (or various real-time open source monitoring tools) to get more visibility and take appropriate action in case of hardware failure or performance degradation. Set up an Auto Scaling group to maintain a fixed fleet size so that unhealthy Amazon EC2 instances are replaced with new ones.
- Utilize Amazon EBS and set up cron jobs so that incremental snapshots are automatically uploaded to Amazon S3 and data is persisted independently of your instances.
- Utilize Amazon RDS and set a backup retention period so that automated backups are performed and kept for as long as you need them.
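The Elastic IP failover pattern described above can be sketched generically. This is a minimal illustration, not real AWS client code: `check_health` and `remap_ip` are hypothetical stand-ins for your health monitor and the EC2 call that remaps the Elastic IP (`AssociateAddress`).

```python
def failover(primary, standby, check_health, remap_ip):
    """Route traffic to the standby host when the primary fails its health check.

    check_health(host) -> bool and remap_ip(host) are stand-ins for a real
    monitoring probe and the EC2 AssociateAddress call that remaps the
    static Elastic IP to a different instance.
    """
    if check_health(primary):
        return primary      # primary is healthy: traffic stays where it is
    remap_ip(standby)       # remap the Elastic IP so traffic hits the standby
    return standby
```

Because the Elastic IP itself is what clients connect to, the remap is transparent to them; the same pattern covers both hardware failure and rolling an upgrade onto a fresh set of servers.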
Decouple your components
- Use Amazon SQS to isolate components
- Use Amazon SQS as buffers between components
- Design every component such that it exposes a service interface, is responsible for its own scalability in all appropriate dimensions, and interacts with other components asynchronously.
- Bundle the logical construct of a component into an Amazon Machine Image so that it can be deployed more often
- Make your applications as stateless as possible. Store session state outside of component (in Amazon SimpleDB, if appropriate)
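The buffering pattern above can be illustrated with Python's standard `queue.Queue` standing in for Amazon SQS; the comments name the corresponding SQS operations. This is a sketch of the decoupling idea, not SQS client code.

```python
import queue

def produce(buf, jobs):
    """Front-end component: enqueue work instead of calling the worker directly."""
    for job in jobs:
        buf.put(job)                 # SQS equivalent: SendMessage

def consume(buf):
    """Worker component: drain the buffer at its own pace."""
    done = []
    while True:
        try:
            job = buf.get_nowait()   # SQS equivalent: ReceiveMessage
        except queue.Empty:
            break
        done.append(job.upper())     # process the job (placeholder work)
        buf.task_done()              # SQS equivalent: DeleteMessage
    return done
```

Because the producer never calls the worker directly, either side can fail, restart, or scale out independently; the queue absorbs bursts in the meantime.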
Implement Elasticity
Automate and bootstrap your instances
- Define Auto-scaling groups for different clusters using the Amazon Auto-scaling feature in Amazon EC2.
- Monitor your system metrics (CPU, Memory, Disk I/O, Network I/O) using Amazon CloudWatch and take appropriate actions (launching new AMIs dynamically using the Auto-scaling service) or send notifications.
- Store and retrieve machine configuration information dynamically: utilize Amazon SimpleDB to fetch config data during boot-time of an instance (e.g. database connection strings).
- SimpleDB may also be used to store information about an instance such as its IP address, machine name and role.
- Design a build process such that it dumps the latest builds to a bucket in Amazon S3; download the latest version of an application from that bucket during system startup.
- Invest in building resource management tools (automated scripts, pre-configured images) or use smart open source configuration management tools like Chef, Puppet, CFEngine or Genome.
- Bundle Just Enough Operating System (JeOS) and your software dependencies into an Amazon Machine Image so that it is easier to manage and maintain. Pass configuration files or parameters at launch time and retrieve user data and instance metadata after launch.
- Reduce bundling and launch time by booting from Amazon EBS volumes and attaching multiple Amazon EBS volumes to an instance.
- Create snapshots of common volumes and share snapshots among accounts wherever appropriate.
- Application components should not assume the health or location of the hardware they are running on. For example, dynamically attach the IP address of a new node to the cluster. Automatically fail over and start a new clone in case of a failure.
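The boot-time configuration pattern above can be sketched as follows. A plain dict stands in for Amazon SimpleDB (or any key-value config store), and the role names are hypothetical; the point is that an instance discovers its own configuration at launch instead of having it baked into the image.

```python
def bootstrap(role, config_store, defaults=None):
    """Fetch per-role configuration at instance boot time.

    config_store stands in for SimpleDB: keys are instance roles, values are
    config dicts (e.g. database connection strings). Unknown roles fall back
    to the defaults so a freshly launched clone can still come up.
    """
    cfg = dict(defaults or {})
    cfg.update(config_store.get(role, {}))
    return cfg
```

Keeping role in user data and config in a store means one generic AMI can serve every cluster: the same image boots as a web server or a batch worker depending on what it fetches.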
Parallelize
- Multi-thread your Amazon S3 requests
- Multi-thread your Amazon SimpleDB GET and BATCHPUT requests
- Create a JobFlow using the Amazon Elastic MapReduce Service for each of your daily batch processes (indexing, log analysis etc.) which will compute the job in parallel and save time.
- Use the Elastic Load Balancing service and spread your load across multiple web app servers dynamically
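Multi-threading your Amazon S3 or SimpleDB requests amounts to fanning independent calls out over a thread pool. A minimal sketch with the standard library, where `fetch` is a stand-in for a single GET request (in a real deployment it would wrap the S3 or SimpleDB client call):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(keys, fetch, workers=8):
    """Issue independent GET requests in parallel; results keep input order."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(fetch, keys))
```

Because each request is independent and mostly waits on the network, threads overlap that wait time; with 8 workers the wall-clock time for a batch of GETs approaches one-eighth of the serial time.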
Data Placement: keep dynamic data close to the compute and static data closer to the end user
- Ship your data drives to Amazon using the Import/Export service.
- It may be cheaper and faster to move large amounts of data using the sneakernet than to upload using the Internet.
- Utilize the same Availability Zone to launch a cluster of machines
- Create a distribution for your Amazon S3 bucket and let Amazon CloudFront cache the content in that bucket at edge locations around the world.
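Whether shipping drives beats uploading comes down to simple arithmetic on sustained link speed. A rough estimate (ignoring protocol overhead and retries):

```python
def upload_days(size_tb, link_mbps):
    """Days to upload size_tb terabytes over a sustained link_mbps link."""
    bits = size_tb * 1e12 * 8            # decimal terabytes to bits
    seconds = bits / (link_mbps * 1e6)   # megabits/s to bits/s
    return seconds / 86400
```

At a sustained 100 Mbps, 10 TB takes a little over nine days to upload; if a shipped drive would arrive sooner, the Import/Export service wins.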