Wednesday, June 17, 2020

AWS Cloud Setup Using Terraform Code

         

Hi guys, 

I'm back with another project of mine, and it's a different tech from everything I have shared till now. It's a cloud-computing-based project. Here I am using Terraform for Infrastructure as Code: we write the whole setup as Terraform code, and with one command it creates the full setup, ready to use.

You don't believe me? Wait and read the full blog; you can also do this.


First, let me tell you: what is Terraform?


Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.

Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state, and then executes it to build the described infrastructure. As the configuration changes, Terraform can determine what changed and create incremental execution plans which can be applied.

The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

The key features of Terraform are:

»Infrastructure as Code

Infrastructure is described using a high-level configuration syntax. This allows a blueprint of your datacenter to be versioned and treated as you would any other code. Additionally, infrastructure can be shared and re-used.

»Execution Plans

Terraform has a "planning" step where it generates an execution plan. The execution plan shows what Terraform will do when you call apply. This lets you avoid any surprises when Terraform manipulates infrastructure.

»Resource Graph

Terraform builds a graph of all your resources, and parallelizes the creation and modification of any non-dependent resources. Because of this, Terraform builds infrastructure as efficiently as possible, and operators get insight into dependencies in their infrastructure.

»Change Automation

Complex changesets can be applied to your infrastructure with minimal human interaction. With the previously mentioned execution plan and resource graph, you know exactly what Terraform will change and in what order, avoiding many possible human errors.

Today I will talk more about Infrastructure as Code, but in the future I may talk about some of the other features also...

First, let me tell you my plan: what exactly I am going to do.

In short >>>

1. Create a key and a security group which allows port 80.

2. Launch an EC2 instance.

3. In this EC2 instance, use the key and security group we created in step 1.

4. Launch one volume (EBS) and mount that volume into /var/www/html.

5. The developer has uploaded the code into a GitHub repo; the repo also has some images.

6. Copy the GitHub repo code into /var/www/html.

7. Create an S3 bucket, copy/deploy the images from the GitHub repo into the S3 bucket, and change the permission to public readable.

8. Create a CloudFront distribution using the S3 bucket (which contains the images) and use the CloudFront URL to update the code in /var/www/html.

.........................................................................................................

For using Terraform:

  1. First, download the Terraform software.
  2. Make a new directory (folder), basically a workspace.
  3. In this directory, make a file with the .tf extension.
  4. Then go to this directory with the command prompt.

Use these commands >>>

terraform init >>> downloads the required provider plugins first.

terraform apply >>> applies the configuration on that platform.

terraform destroy >>> destroys the whole environment.
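For example, the smallest .tf file you could start with is just a provider block; terraform init reads it and downloads the AWS provider plugin. The profile below is my AWS CLI profile name; use your own.

```hcl
# minimal main.tf for terraform init to work with
provider "aws" {
  region  = "ap-south-1"
  profile = "harshetjain"  # your AWS CLI profile name
}
```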

............................................................................................................

Create a key through terraform

resource "tls_private_key" "key" {
  algorithm = "RSA"
}

// create a key pair on AWS from the generated public key
resource "aws_key_pair" "key_pair" {
  key_name   = "mykey1"
  public_key = tls_private_key.key.public_key_openssh

  depends_on = [tls_private_key.key]
}

// save the private key on your local system
resource "local_file" "key_download" {
  content  = tls_private_key.key.private_key_pem
  filename = "mykey1.pem"

  depends_on = [tls_private_key.key]
}
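A small extra, assuming a recent enough version of the local provider (1.4 and above support this argument): you can restrict the saved key's file permissions in the same resource, so SSH accepts the key without a manual chmod.

```hcl
// same local_file resource as above, with file_permission added
resource "local_file" "key_download" {
  content         = tls_private_key.key.private_key_pem
  filename        = "mykey1.pem"
  file_permission = "0400" // read-only for the owner, as ssh expects
}
```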


[screenshot: generated key pair]

..................................................................................................

Create a security group

resource "aws_security_group" "allow_tls" {
  name        = "Security_groups"
  description = "Allow TLS inbound traffic"

  ingress {
    description = "Allow HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Allow SSH from anywhere"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = {
    Name = "first_try"
  }
}

[screenshot: security group in the AWS console]

....................................................................................................

 

Create an instance with the generated key and security group

provider "aws" {
  region  = "ap-south-1"
  profile = "harshetjain"
}

variable "image_id" {
  type        = string
  description = "The id of the machine image (AMI) to use for the server."
  default     = "ami-052c08d70def0ac62"
}

variable "instance_type" {
  type        = string
  description = "The instance type to use for the server."
  default     = "t2.micro"
}

variable "key_name" {
  type        = string
  description = "The key to use for the server."
  default     = "mykey1"
}

variable "security_group" {
  type        = string
  description = "The security group to use for the server."
  default     = "Security_groups"
}

resource "aws_instance" "web" {
  ami             = var.image_id
  instance_type   = var.instance_type
  key_name        = var.key_name
  security_groups = [var.security_group]

  tags = {
    Name = "prod"
  }

  depends_on = [
    aws_key_pair.key_pair,
    aws_security_group.allow_tls
  ]
}


Here you can do much more. For example, if you comment out the default:

variable "key_name" {
  type        = string
  description = "The key to use for the server."
  // default  = "mykey1"
}

then Terraform asks you at run time what your key name is.
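If you don't want Terraform to ask every time, a standard convention is to keep the answers in a terraform.tfvars file in the same workspace; Terraform loads it automatically. The values below just repeat the defaults from this blog.

```hcl
# terraform.tfvars — values here are picked up instead of prompting
key_name       = "mykey1"
security_group = "Security_groups"
```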


[screenshot: running EC2 instance]

....................................................................................................

Launch an EBS volume and mount it into the folder

// Create a 1 GiB volume in AWS, in the same availability zone as the instance
resource "aws_ebs_volume" "block_storage" {
  availability_zone = aws_instance.web.availability_zone
  size              = 1

  tags = {
    Name = "web_data"
  }
}

// attach this volume to the instance
// (attached as /dev/sdh, it shows up inside the instance as /dev/xvdh)
resource "aws_volume_attachment" "ebs" {
  device_name  = "/dev/sdh"
  volume_id    = aws_ebs_volume.block_storage.id
  instance_id  = aws_instance.web.id
  force_detach = true

  // connect remotely and run some commands:
  // format and mount the volume, then download the code from GitHub
  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      "sudo yum install -y httpd git",        // the plain AMI has neither the web server nor git
      "sudo systemctl start httpd",
      "sudo mkfs.ext4 /dev/xvdh",
      "sudo mount /dev/xvdh /var/www/html",
      "sudo rm -rf /var/www/html/*",
      "sudo git clone https://github.com/Harshetjain666/git-.git /var/www/html/"
    ]
  }
}
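To see where the site is running without opening the AWS console, you can add an output block (a small addition of mine, not in the original code) that prints the instance's public IP after apply:

```hcl
output "instance_ip" {
  value = aws_instance.web.public_ip
}
```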

.................................................................................................

Create an S3 bucket and a CloudFront distribution

// create an S3 bucket
resource "aws_s3_bucket" "b" {
  bucket = "terraform-bucket123666665"
  acl    = "public-read"
}

// upload the image into the bucket
resource "aws_s3_bucket_object" "object" {
  bucket = aws_s3_bucket.b.bucket
  key    = "data.jpg"
  source = "C:/Users/Harshet jain/Pictures/workspace/terraform.jpg"
  acl    = "public-read"
}

locals {
  s3_origin_id = aws_s3_bucket.b.id
}

// create a CloudFront distribution in front of the bucket
resource "aws_cloudfront_distribution" "s3_distribution" {
  origin {
    domain_name = aws_s3_bucket.b.bucket_regional_domain_name
    origin_id   = local.s3_origin_id
  }

  enabled             = true
  is_ipv6_enabled     = true
  comment             = "First cloudfront through terraform"
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["DELETE", "GET", "HEAD", "OPTIONS", "PATCH", "POST", "PUT"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = local.s3_origin_id

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    viewer_protocol_policy = "allow-all"
    min_ttl                = 0
    default_ttl            = 3600
    max_ttl                = 86400
  }

  price_class = "PriceClass_200"

  restrictions {
    geo_restriction {
      restriction_type = "whitelist"
      locations        = ["US", "CA", "GB", "DE", "IN"]
    }
  }

  tags = {
    Environment = "production"
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }
}
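Same trick here: an output block (my addition, not in the original code) prints the CloudFront domain after apply, so you can copy it straight into the website code:

```hcl
output "cloudfront_domain" {
  value = aws_cloudfront_distribution.s3_distribution.domain_name
}
```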

[screenshots: S3 bucket and CloudFront distribution in the AWS console]

...............................................................................................................

After that, go in manually and change the image URL in the code to the CloudFront URL. But a manual step is not good practice in this automation world; I will find a way and update you soon...
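One possible way to automate that last step (a sketch, not tested end-to-end): use a null_resource that runs after CloudFront is created and rewrites the image URL over SSH with sed. The placeholder OLD_IMAGE_URL and the file name index.html are assumptions about what the repo's code contains; adjust them to your repo.

```hcl
// sketch: once CloudFront exists, replace the image URL in the deployed page
resource "null_resource" "update_image_url" {
  depends_on = [aws_cloudfront_distribution.s3_distribution]

  connection {
    type        = "ssh"
    user        = "ec2-user"
    private_key = tls_private_key.key.private_key_pem
    host        = aws_instance.web.public_ip
  }

  provisioner "remote-exec" {
    inline = [
      // OLD_IMAGE_URL is hypothetical; use the URL actually present in the repo's HTML
      "sudo sed -i 's|OLD_IMAGE_URL|https://${aws_cloudfront_distribution.s3_distribution.domain_name}/data.jpg|g' /var/www/html/index.html"
    ]
  }
}
```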

.................................................................................................

Output >>>

[screenshot: final web page output]

I will come back with a new task; stay tuned with me...

Thank you for reading...
