AWS re-Invent 2020 Announcements

For the first time since its launch, AWS is running a completely virtual edition of re-Invent. This blog is a quick read on the announcements from AWS re-Invent 2020. Our expert team will keep it updated over the next 18 days as new announcements and features land.

Mac on EC2: Amazon EC2 now supports macOS in the cloud, the first offering of its kind. Development teams focused on iOS apps can use it to build, test, package, and sign apps with Xcode.

1 ms Billing on Lambda: AWS Lambda duration is now billed in 1 millisecond increments instead of being rounded up to the nearest 100 ms. While this might not seem significant for everyone, it matters a lot for workloads that execute well under 100 ms.
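As a rough illustration with purely hypothetical numbers (one million monthly invocations of a function that runs 42 ms on average), the billed duration drops to less than half:

  # Hypothetical figures, purely for illustration
  INVOCATIONS=1000000
  ACTUAL_MS=42
  echo "Billed duration before (rounded up to 100 ms): $((INVOCATIONS * 100)) ms"
  echo "Billed duration now (1 ms granularity):        $((INVOCATIONS * ACTUAL_MS)) ms"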

S3 Multi-Destination Bucket Replication: Adding to the already available S3 SRR and CRR (Same-Region Replication and Cross-Region Replication), AWS now supports replicating data from one source bucket to multiple destination buckets.
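As a sketch of what a multi-destination configuration might look like (the bucket names, rule IDs, and IAM role ARN below are hypothetical placeholders; the source and destination buckets all need versioning enabled):

  # replication.json: two rules, each replicating to a different destination bucket
  {
    "Role": "arn:aws:iam::111122223333:role/s3-replication-role",
    "Rules": [
      { "ID": "to-destination-1", "Priority": 1, "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "DeleteMarkerReplication": { "Status": "Disabled" },
        "Destination": { "Bucket": "arn:aws:s3:::destination-bucket-1" } },
      { "ID": "to-destination-2", "Priority": 2, "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "DeleteMarkerReplication": { "Status": "Disabled" },
        "Destination": { "Bucket": "arn:aws:s3:::destination-bucket-2" } }
    ]
  }

  # Apply the configuration to the source bucket
  aws s3api put-bucket-replication --bucket source-bucket \
      --replication-configuration file://replication.json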

DevOps Guru: Part of the ML services on AWS, DevOps Guru is intended to identify potential operational issues that could affect application availability and to recommend fixes.

S3 Strong Read-After-Write Consistency: After 14 years of service, the AWS S3 eventual consistency model is retired. S3 now provides strong read-after-write consistency, which ensures that what you write is immediately available to read.

AWS EKS Anywhere: Run Kubernetes on-premises and operate it with AWS EKS Anywhere, releasing next year. EKS Anywhere provides an installable software package for creating and operating Kubernetes clusters on-premises, along with automation tooling for cluster lifecycle support.

AWS Proton: Fully managed application deployment service for container and serverless apps.

Amazon Lookout for Equipment: An upcoming service that analyzes equipment data to detect abnormal behavior and identify machine failures before they happen.

Amazon Aurora Serverless v2: A serverless database that can scale to hundreds of thousands of requests within seconds, with no database storage or servers to deploy and manage.

Mac on AWS EC2

AWS re-Invent 2020 has started, and so have the exciting announcements! This is happy news for all the developers and start-ups focused on iOS app development: going forward, you no longer need to spend on the latest, expensive Apple desktops to develop and test your iOS apps.

Today AWS introduced the first of its kind: an Apple desktop environment in the cloud, powered by Mac mini hardware and the AWS Nitro System. It can be used to build, test, package, and sign apps with Xcode. The instances come with 12 vCPUs and 32 GB of RAM and run either macOS Mojave (10.14) or Catalina (10.15).

Mac instances are currently available in the N. Virginia, Ohio, Oregon, Ireland, and Singapore regions. Each instance is spun up on a Dedicated Host, which gives you complete transparency into and control over the physical host. Once deployed, you access the machine running on the host over normal SSH, or you can tunnel VNC traffic over SSH for a graphical session.

Commercially, you pay for the hours you use, with a minimum tenancy of 24 hours for each Dedicated Host you purchase. This means you now have the flexibility of a dedicated Mac machine in the cloud with no heavy upfront investment. While AWS has only just launched Mac instances, we expect more features and specifications to be added to this category over time. Reach out to our expert team if you are building a cloud roadmap for your organization or your customers.
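As a rough sketch of how provisioning might look from the AWS CLI (the region, availability zone, AMI ID, key pair, and host ID below are placeholder assumptions, not values from the announcement):

  # Allocate a Dedicated Host capable of running Mac instances
  aws ec2 allocate-hosts --region us-east-1 --availability-zone us-east-1a \
      --instance-type mac1.metal --quantity 1

  # Launch a macOS instance on that host
  aws ec2 run-instances --region us-east-1 --image-id ami-xxxxxxxx \
      --instance-type mac1.metal --key-name my-key \
      --placement Tenancy=host,HostId=h-xxxxxxxx

  # Connect over SSH, or tunnel VNC (port 5900) through SSH for a graphical
  # session (assumes Screen Sharing has been enabled on the instance)
  ssh -i my-key.pem ec2-user@<instance-public-ip>
  ssh -i my-key.pem -L 5900:localhost:5900 ec2-user@<instance-public-ip>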

RabbitMQ Cluster Mode

RabbitMQ Introduction:

RabbitMQ, as the name denotes, is a message-queuing system, also known as a message broker or queue manager. To simplify the definition, it is a piece of software where queues are defined and configured; applications connect to these queues to publish or consume one or more messages.

A message can be anything from simple text or binary data to information about an individual process that may even be running on a different server.

Queues work on the producer and consumer model: a message is published to the queue by a sending application (the producer) and processed by a receiving application (the consumer). The queue-manager software retains messages in the system until a consumer consumes and processes them.

Putting it all together, a message broker can be defined as a middle-man that holds the messages produced by applications until they can be processed at the consumer end, depending on the availability of producers and consumers.

For example, consider 5 producers, each producing 1 message every second, and only 3 consumers, each able to consume 1 message every second. In this scenario, 2 messages every second would go unprocessed by the consumers. Here comes the middle-man message broker: it stores all the messages produced by the producers and lets the consumers consume them as they become available, making sure that no messages are lost in the process. Also, queues by default follow a first-in, first-out strategy, so messages are processed in order.
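As a quick hands-on illustration of publish and consume (a sketch assuming the RabbitMQ management plugin and its bundled rabbitmqadmin tool are installed, against a default localhost broker; the queue name and payload are arbitrary):

  # Declare a queue, publish one message to it, then fetch the message back
  rabbitmqadmin declare queue name=demo durable=true
  rabbitmqadmin publish exchange=amq.default routing_key=demo payload="hello from producer"
  rabbitmqadmin get queue=demo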

RabbitMQ (RMQ) Cluster Mode:

A RabbitMQ cluster is a logical grouping of one or more nodes, each sharing users, virtual hosts, queues, exchanges, bindings, runtime parameters, and other distributed state. For RabbitMQ queues to be highly available, the underlying infrastructure must be highly available, which is accomplished by creating a cluster of at least 3 RMQ nodes. A minimum of 3 nodes is recommended so the cluster can retain quorum among the servers.

Unlike some other cluster models, an RMQ cluster doesn't designate nodes as master and slave. Every RabbitMQ broker initially starts out running as a single node; these nodes can then be joined into a cluster, and subsequently turned back into individual brokers again. All data and state required for the operation of a RabbitMQ broker is replicated across all nodes. An exception to this is message queues, which by default reside on one node, though they are visible and reachable from all nodes.

Instructions for downloading and installing RabbitMQ can be found on the official RMQ site.

Step I: RabbitMQ cluster creation:

Consider a 3-node RabbitMQ cluster running RMQ version 3.8.3; we'll name the nodes Node1, Node2, and Node3 for reference. The nodes within an RMQ cluster should be able to resolve each other's hostnames, either short or fully qualified (FQDNs), and every node must be able to resolve the hostnames of all cluster members.

Let's start all the RabbitMQ servers separately and then combine them to form a cluster.

On all nodes:

  1. To ensure RabbitMQ is running on all 3 nodes, restart the service with the command below:
    sudo service rabbitmq-server restart
  2. Enable the rabbitmq-server service via chkconfig so it starts on boot:
    sudo chkconfig rabbitmq-server on
  3. Verify the rabbitmq-server service is running on all three nodes (Node1, Node2, and Node3) by checking its status:
    sudo rabbitmqctl status
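With the service running on all three nodes, the nodes can be joined into a cluster. The commands below are a minimal sketch of the standard rabbitmqctl workflow; they assume the nodes are named rabbit@node1, rabbit@node2, and rabbit@node3 and that all three nodes share the same Erlang cookie (/var/lib/rabbitmq/.erlang.cookie):

  # On Node2 and Node3: stop the RabbitMQ application (the Erlang node keeps
  # running), reset the node, and join the cluster formed by Node1
  sudo rabbitmqctl stop_app
  sudo rabbitmqctl reset
  sudo rabbitmqctl join_cluster rabbit@node1
  sudo rabbitmqctl start_app

  # On any node: confirm that all three nodes are now part of the cluster
  sudo rabbitmqctl cluster_status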

Enforce AWS Tags While Creating AWS Resources

The infrastructure team at DevOpSpace always tries to prove that they're different. When things are quiet, the team invests time in learning and building proofs of concept to understand how things work, which in turn earns appreciation from our customers. The JSON documents below were created by our team to enforce AWS tags while creating AWS resources, and we're sharing them with you too!

Both the documents are IAM policies – one related to EC2 instances and the other related to EBS Volumes. Coming to the first one:

Deny Creating EC2 Instance Without AWS Tags

This JSON-based IAM policy enforces tagging of EC2 instances at creation time: it denies creating an EC2 instance that doesn't carry the pre-defined tag key (environment in the example). It removes the need for an AWS Lambda script to automate tagging after the fact, if you don't want to spend additional cost on that.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EC2CreateInstanceWithTag",
      "Effect": "Deny",
      "Action": "ec2:RunInstances",
      "Resource": "arn:aws:ec2:us-east-1:AWSaccount:instance/*",
      "Condition": {
        "StringNotLike": {
          "aws:RequestTag/environment": "*"
        }
      }
    }
  ]
}

Deny Creating EBS Volumes Without AWS Tags

This JSON-based IAM policy enforces tagging of EBS volumes at creation time: it denies creating an EBS volume that doesn't carry the pre-defined tag key, ensuring that every EBS volume is tagged when it is created.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "EBSVolumeWithTag",
      "Effect": "Deny",
      "Action": "ec2:CreateVolume",
      "Resource": "arn:aws:ec2:us-east-1:AWSaccount:volume/*",
      "Condition": {
        "ForAllValues:StringEqualsIfExists": {
          "aws:RequestTag/environment": "*"
        }
      }
    }
  ]
}

There are placeholders in the policies (such as the region and AWSaccount, your account ID) that have to be replaced with appropriate values. Because these are explicit Deny statements, they work even for a user who otherwise has full Allow permissions on EC2/EBS. Note that the policies only enforce the presence of the tag key; we don't have any control over the tag values. Try this out and let us know how it goes.
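As a rough sketch of how you might roll these out and test them from the AWS CLI (the policy name, file name, user name, account ID, and AMI ID below are placeholders for illustration):

  # Create the policy and attach it to a test IAM user
  aws iam create-policy --policy-name DenyUntaggedEC2 \
      --policy-document file://deny-untagged-ec2.json
  aws iam attach-user-policy --user-name test-user \
      --policy-arn arn:aws:iam::111122223333:policy/DenyUntaggedEC2

  # This launch should be denied (no environment tag supplied)
  aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro

  # This launch should succeed, because the environment tag is set at creation time
  aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t3.micro \
      --tag-specifications 'ResourceType=instance,Tags=[{Key=environment,Value=dev}]'

  # The same pattern applies to the EBS policy and ec2 create-volume
  aws ec2 create-volume --availability-zone us-east-1a --size 8 \
      --tag-specifications 'ResourceType=volume,Tags=[{Key=environment,Value=dev}]'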

AWS On-Demand EC2 Default Limit Change to vCPU Based

The day before yesterday (to be precise, on September 24, 2019), there was an important announcement from Amazon Web Services about the default limit for On-Demand EC2 instances. We received a lot of queries as soon as our customers got this information via email, so this blog explains what has changed and how it affects you.

OK, before we deep-dive into the change, let's understand how the On-Demand AWS EC2 instance limit currently works. If you're a cloud engineer, you would say that the limit is an instance count per instance type, and that's true. The default limit for general-purpose EC2 instances (like t3 and m5) is 20 per region. As you move towards higher-end instances, this decreases to 10, 5, or even 1 (for “p” and “g” types). As per the recent announcement, this changes with effect from Oct 24, 2019.

Now, coming to the change: AWS has changed how the default limit for On-Demand EC2 works. Going forward, the default limit will be based on the number of virtual CPUs (vCPUs) attached to your running EC2 instances. In addition, there will be only five different On-Demand instance limits: one for General Purpose instances and one each for the “F”, “G”, “P”, and “X” instance families. Again, this limit is per region.

So, what are the new limits? AWS publishes the default vCPU limit for each of these groups in the announcement, and you can view your account's limits (and request increases) in the Service Quotas console.
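The same limits can also be listed from the command line. This is a sketch assuming a recent AWS CLI with credentials configured for the region in question:

  # List the EC2 On-Demand vCPU quotas for the current region
  aws service-quotas list-service-quotas --service-code ec2 \
      --query "Quotas[?contains(QuotaName, 'On-Demand')].[QuotaName,Value]" \
      --output table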

You need not worry about this transition if you have only a small number of EC2 instances, though it depends on which instance types they are. If you have at least a medium-sized infrastructure, you should plan for the transition: count the EC2 instances you have and add up the vCPUs they use based on their instance types. AWS completes the whole transition within 11 days starting from Oct 24, with the exact date depending on your AWS account ID. And don't worry: AWS says it will automatically provision the required number of vCPUs based on your usage, but you can always contact AWS Support to get the limits raised sooner.

For example, your AWS account has 13 EC2 instances in any given region. This collection has a mixture of the instance families like “T”, “R”, “G” and “M” in the following configuration:

4 t3.medium
3 t2.micro
1 r5.2xlarge
3 m4.large
2 g4dn.2xlarge

In the above combination, there's one group with the “T”, “R”, and “M” instance types (which count against the General Purpose limit) and another group with the “G” family. Here's the calculation: t3.medium has 2 vCPUs, so 4 instances give 8; t2.micro has 1 vCPU, so 3 instances give 3; r5.2xlarge has 8 vCPUs, so 1 instance gives 8; m4.large has 2 vCPUs, so 3 instances give 6. This group therefore totals 25 vCPUs.

Coming to the “G” family, g4dn.2xlarge has 8 vCPUs, so 2 instances make 16.

In this scenario, you are not affected by the transition and need not make any changes or limit-increase requests with AWS.
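To check how many vCPUs your running instances consume today, here is a rough sketch using the AWS CLI and awk. It counts every running instance in the current region without splitting them into the five limit groups, so treat the result as a starting point:

  # Sum vCPUs (cores x threads) across all running instances in the region
  aws ec2 describe-instances \
      --filters Name=instance-state-name,Values=running \
      --query 'Reservations[].Instances[].[InstanceType,CpuOptions.CoreCount,CpuOptions.ThreadsPerCore]' \
      --output text |
  awk '{ vcpus += $2 * $3 } END { print "Total running vCPUs:", vcpus }'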

DevOpSpace is always passionate about helping organizations and businesses with every aspect of the cloud. Do let us know if you need help on your cloud journey; with our passionate cloud and DevOps experts, we can smooth your organization's path to the cloud.
