Posts

Showing posts from 2016

Using Python/Boto List number of instances associated with each Security Group in AWS

List the number of instances associated with each security group:

import boto

# Connect using credentials from the environment/boto config
ec2 = boto.connect_ec2()

# Print each security group's id, name and attached instance count
sgs = ec2.get_all_security_groups()
for sg in sgs:
    print(sg.id + '\t' + sg.name + '\t\t\t' + str(len(sg.instances())))

Show Running Instances on AWS including key/tag pairs

import boto
import json, ast

ec2 = boto.connect_ec2()

# Only return reservations for instances that are currently running
reservations = ec2.get_all_reservations(
    filters={'instance-state-name': 'running'})
for reservation in reservations:
    for instance in reservation.instances:
        # Round-trip the tags through JSON so they print as a plain dict
        print(instance.id + ', ' + instance.instance_type + ', ' +
              str(ast.literal_eval(json.dumps(instance.tags))))

Create a swarm cluster on AWS with Docker 1.12 (swarm mode) running two test services

Tools required:
docker 1.12.0
docker-machine 0.8.0
docker-compose 1.8.0
Kitematic 0.12.0
Boot2Docker ISO 1.12.0
VirtualBox 5.0.24
AWS CLI 1.7.36

The AWS CLI assumes credentials are already set in c:\users\xxxx\.aws\credentials:

[default]
aws_secret_access_key=xxxxxxxxx
aws_access_key_id=xxxxxxxxxxxx

Spin up the swarm cluster (swarm mode) in AWS:

docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-manager
docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-node-1
docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-node-2
docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-node-3
docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-node-4
docker-machine create --driver amazonec2 --amazonec2-region eu-west-1 aws-swarm-node-5
docker-machine ip aws-swarm-manager

Create a swarm cluster with Virtualbox with Docker 1.12 (swarm mode) running two test services

Tools required:
docker 1.12.0
docker-machine 0.8.0
docker-compose 1.8.0
Kitematic 0.12.0
Boot2Docker ISO 1.12.0
VirtualBox 5.0.24

Create the machines:

docker-machine create --driver virtualbox swarm-manager
docker-machine create --driver virtualbox swarm-node-1
docker-machine create --driver virtualbox swarm-node-2
docker-machine create --driver virtualbox swarm-node-3
docker-machine create --driver virtualbox swarm-node-4
docker-machine create --driver virtualbox swarm-node-5

Capture the manager IP and initialise the swarm (Windows batch syntax):

docker-machine ip swarm-manager > manager_ip.txt
set /p MANAGER_IP=< manager_ip.txt
docker-machine ssh swarm-manager docker swarm init --advertise-addr %MANAGER_IP%

Capture the manager and worker join tokens:

docker-machine ssh swarm-manager docker swarm join-token --quiet manager > manager_token.txt
set /p MANAGER_TOKEN=< manager_token.txt
docker-machine ssh swarm-manager docker swarm join-token --quiet worker > worker_token.txt
set /p WORKER_TOKEN=< worker_token.txt

Bootstrap setup Docker Engine in Swarm

docker-machine create -d virtualbox local
@FOR /f "tokens=*" %i IN ('docker-machine env local') DO @%i
docker run swarm create

docker-machine create -d virtualbox --swarm --swarm-master --swarm-discovery token://b5a6bbbe593a7a888f01183e47e60eb5 swarm-manager
docker-machine create -d virtualbox --swarm --swarm-discovery token://b5a6bbbe593a7a888f01183e47e60eb5 swarm-node-1
docker-machine create -d virtualbox --swarm --swarm-discovery token://b5a6bbbe593a7a888f01183e47e60eb5 swarm-node-2
docker-machine create -d virtualbox --swarm --swarm-discovery token://b5a6bbbe593a7a888f01183e47e60eb5 swarm-node-3

@FOR /f "tokens=*" %i IN ('docker-machine env --swarm swarm-manager') DO @%i
docker-machine ls

docker info output:

Containers: 5
 Running: 5
 Paused: 0
 Stopped: 0
Images: 4
Server Version: swarm/1.2.5
Role: primary
Strategy: spread
Filters: health, port, containerslots, dependency, affinity, constraint
Nodes: 4

Basic Docker SWARM Cluster with Consul, Vagrant, Docker Toolbox and Virtual Box

Below will help run a swarm cluster locally using Vagrant. This will create and set up 5 Vagrant machines in a private network (10.0.7.0/24):

Consul Master: 10.0.7.10
Swarm Manager: 10.0.7.11
Swarm node 1: 10.0.7.12
Swarm node 2: 10.0.7.13
Swarm node 3: 10.0.7.14

The steps were tested using the following:
docker toolbox 1.11.1
docker 1.11.1
vagrant 1.7.2
docker-machine 0.7.0
docker-compose 1.7.0
Kitematic 0.10.2
Boot2Docker ISO 1.11.1
VirtualBox 4.3.26

Bootstrap:

mkdir c:\sd
git clone https://github.com/deviantony/vagrant-swarm-cluster.git
cd vagrant-swarm-cluster
.\startup-swarm.bat

startup-swarm.bat contents:

vagrant up --provider virtualbox
docker -H 10.0.7.11:2375 run -d --restart always --name consul1 --net host consul agent -server -bind 10.0.7.11 -client 10.0.7.11 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3
docker -H 10.0.7.12:2375 run -d --restart always --name consul2 --net host consul agent -server -bind 10.0.7.12 -client 10.0.7.12 -retry-join 10.0.7.11 -retry-join 10.0.7.12 -retry-join 10.0.7.13 -retry-join 10.0.7.14 -bootstrap-expect 3

Modify EBS Volume Size Attached to Running Instance on AWS

Recently we had a requirement to resize a number of EBS volumes across multiple instances. The script below was pretty useful; we wrapped it with a batch script to update all the required AWS instances: https://github.com/colinbjohnson/aws-missing-tools/tree/master/ec2-modify-ebs-volume
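Before running it, a quick inventory of the attached volumes helps identify the targets. A minimal sketch using boto (as in the posts above), assuming default credentials are configured; it prints volume id, size, owning instance and device:

import boto

ec2 = boto.connect_ec2()

# List every EBS volume that is currently attached, with its size,
# so we know which volumes need resizing before running the tool.
for vol in ec2.get_all_volumes():
    if vol.attachment_state() == 'attached':
        print(vol.id + '\t' + str(vol.size) + ' GiB\t' +
              vol.attach_data.instance_id + '\t' + vol.attach_data.device)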

VictorOps for Noise Reduction - Target Alerts

VictorOps is a DevOps alerting, routing, and real-time incident management solution that decreases the time it takes to resolve problems. One of its key advantages is the ability to quickly reduce noise across various alerting systems. https://victorops.com/

serverspec to verify our IIS Configuration

An example serverspec to run some basic tests of an IIS website configuration, e.g. exists, enabled, running, the app pool under which it is running, the port and protocol binding, and the folder/path where the web application code is.

describe 'IIS Website resource type' do
  describe iis_website('Business Application Root') do
    it { should exist }
    it { should be_enabled }
    it { should be_running }
    it { should be_in_app_pool('Business Application Pool') }
    it { should have_site_bindings(443).with_protocol('https') }
    it { should have_physical_path('d:\\businessapplications\\www') }
  end
end

Check FTP Publishing with serverspec - Infrastructure Testing

Serverspec tests the actual state of your server infrastructure by executing commands locally (CMD), via SSH, via WinRM, via the Docker API and so on. No agent technology is required on your servers, and you can use any configuration management tool: Puppet, Ansible, CFEngine, Itamae, Saltstack and so on.

Create a file CheckFTP.rb with the following contents:

require 'spec_helper'

describe 'FTP Publishing' do
  describe service('FTP Publishing') do
    it { should be_installed }
    it { should be_enabled }
    it { should be_running }
    it { should have_start_mode('Automatic') }
  end

  describe port(21) do
    it { should be_listening.with('tcp') }
  end
end

Run the serverspec file:

ruby -S rspec CheckFTP.rb

Expected output:

FTP Publishing
  Service "FTP Publishing"
    should be installed
    should be enabled
    should be running
    should have start mode "Automatic"
  Port "21"
    should be listening with "tcp"

TestInfra - Testing Your Infrastructure

Write unit tests in Python with Testinfra to test the state of your servers configured by management tools like Salt, Ansible, Puppet, Chef. Today Testinfra does not support Terraform, but maybe this is something to build as a custom module in our environment.

Below is a very simple Testinfra script to verify that apache is installed and at the expected version:

vi apachecheck.py

def test_apache2_is_installed(Package):
    apache2 = Package("apache2")
    assert apache2.is_installed
    assert apache2.version.startswith("2.2")

Then run with:

testinfra -v apachecheck.py

==================================================== test session starts
platform linux2 -- Python 2.7.3, pytest-3.0.1, py-1.4.31, pluggy-0.3.1 -- /usr/bin/python
cachedir: .cache
rootdir: /root, inifile:
plugins: testinfra-1.4.2
collected 1 items

apachecheck.py::test_apache2_is_installed[local] PASSED

================================================== 1 passed in 0.53 seconds

Now edit and change the version, as sketched below.
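A minimal sketch of the edited test; the "2.4" prefix is illustrative. If the installed apache2 is still 2.2.x, re-running testinfra should now report a failure:

def test_apache2_is_installed(Package):
    apache2 = Package("apache2")
    assert apache2.is_installed
    # Changed from "2.2": this assertion now fails against an
    # apache2 package whose version still starts with 2.2
    assert apache2.version.startswith("2.4")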

Monitoring Performance in Microservice Architectures

Semantic monitoring, Ruxit, Terraform, and back to college to brush up on graph theory: http://container-solutions.com/monitoring-performance-microservice-architectures/

The Art of Monitoring

This looks interesting: http://www.artofmonitoring.com
Sample chapter: http://www.artofmonitoring.com/TheArtOfMonitoring_sample.pdf

SmartOS is designed for building clouds

SmartOS is a hypervisor lean enough to run entirely in memory, powerful enough to run as much as you want to throw at it. Provisioning is blindingly fast, thanks to zones and ZFS file system creation. https://smartos.org

Linux Backdoor Doesn't Need Root Privileges

http://www.techworm.net/2016/02/russian-hackers-spying-linux-pc-sophisticated-malware-fysbis.html

New Attack Sucks Information from HTTPS

https://guidovranken.wordpress.com/2015/12/30/https-bicycle-attack/

Next Post - Check Multi Path Status & Policy Across Core Infrastructure During SAN Upgrades

From Go to PowerCLI... How often do you perform routine SAN upgrades, such as extending flash disks, and your SAN provider asks you to verify that all your multipathing is fully working prior to and after the upgrade? Multiple systems use the SAN, e.g. a VM cluster, Linux physical servers, Linux appliances... Rundeck, PowerCLI, Bash and Expect to the rescue... coming next.

ROUGH - First Attempt at Site24x7 Provider for Terraform

This is fairly rough at the moment, but it's the basic structure of a resource within a Terraform provider to support the Site24x7 API using Go and its REST API.

// resource_MonitorGroup.go
package main

import (
    "bytes"
    "encoding/json"
    "io/ioutil"
    "net/http"

    "github.com/hashicorp/terraform/helper/schema"
)

// Site24x7MonitorGroup maps the JSON returned by the monitor_groups API.
type Site24x7MonitorGroup struct {
    Code    int    `json:"code"`
    Message string `json:"message"`
    Data    struct {
        GroupID     string        `json:"group_id"`
        DisplayName string        `json:"display_name"`
        Description string        `json:"description"`
        Monitors    []interface{} `json:"monitors"`
    } `json:"data"`
}

func resourceMonitorGroup() *schema.Resource {
    return &schema.Resource{
        Create: resourceMonitorGroupCreate,
        Read:   resourceMonitorGroupRead,
        Update: resourceMonitorGroupUpdate,
        Delete: resourceMonitorGroupDelete,

        Schema: map[string]*schema.Schema{
            // ...
        },
    }
}

GO - Create Monitor Group in Site24x7 via REST API + JSON

package main

import (
    "bytes"
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    url := "https://www.site24x7.com/api/monitor_groups"
    fmt.Println("URL:>", url)

    // JSON body describing the monitor group to create
    var jsonStr = []byte(`{
    "display_name": "Test Automation Group",
    "description": "Include Test Automation Monitors In This Group"
}`)

    req, err := http.NewRequest("POST", url, bytes.NewBuffer(jsonStr))
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Zoho-authtoken <replace with your token-id>")
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("response Status:", resp.Status)
    fmt.Println("response Headers:", resp.Header)
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println("response Body:", string(body))
}

Convert Json to GO Structure

https://mholt.github.io/json-to-go/


NEXT POST - Terraform Provider for Site24x7...

GO Script to return Site 24x7 Monitor Group Details for specific Monitor Group ID

The following will take one argument, which is the Monitor Group ID returned within the JSON from the previous blog post's script. This will return the full details for that monitor group, i.e. monitors, display name, etc.

To compile:

go build -tags 'main2' -o GetMonitorGroup.exe

// +build main2

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
    "os"
)

func main() {
    // Monitor Group ID passed as the first command-line argument
    arg := os.Args[1]
    url := "https://www.site24x7.com/api/monitor_groups/" + arg
    fmt.Println("URL:>", url)

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Zoho-authtoken <place your token id here>")
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("response Status:", resp.Status)
    fmt.Println("response Headers:", resp.Header)
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println("response Body:", string(body))
}

GO Routine to Call Site 24x7 API using net/http

Not too familiar with Go just yet, but needed to create a simple routine to make a RESTful API call via HTTP GET to access the Site 24x7 API.

go build -tags 'main1' -o GetMonitorGroups.exe

// +build main1

package main

import (
    "fmt"
    "io/ioutil"
    "net/http"
)

func main() {
    url := "https://www.site24x7.com/api/monitor_groups"
    fmt.Println("URL:>", url)

    req, err := http.NewRequest("GET", url, nil)
    if err != nil {
        panic(err)
    }
    req.Header.Set("Authorization", "Zoho-authtoken <place your token id here>")
    req.Header.Set("Content-Type", "application/json")

    client := &http.Client{}
    resp, err := client.Do(req)
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    fmt.Println("response Status:", resp.Status)
    fmt.Println("response Headers:", resp.Header)
    body, _ := ioutil.ReadAll(resp.Body)
    fmt.Println("response Body:", string(body))
}