Posts

Showing posts from 2015

Clean Up Zone Entries in Brocade Switches Using Expect / AWK

Typically, after migration work you may end up with switch configuration on your Brocades that is out of sync, and in some instances you may have duplicate zone entries.

Below is a simple script that takes a log file containing the duplicate zone entries. The script parses this file, connects to the switch and issues the corresponding commands to remove the duplicates.

The problemzones file may look like

2013/09/22-14:43:25, [ZONE-1010], 52, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE2) specification.
2013/09/22-14:43:25, [ZONE-1010], 51, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE1) specification.
2013/09/22-14:43:25, [ZONE-1010], 51, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE3) specification.
2013/09/22-14:43:25, [ZONE-1010], 51, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE4) specification.
2013/09/22-14:43:25, [ZONE-1010], 51, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE5) specification.
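Before any switch commands are issued, the zone names can be pulled out of a log like this with a one-line awk pass. A minimal sketch, assuming the log was saved as `problemzones` (the sample lines below stand in for the real file):

```shell
# Stand-in sample of the Brocade log (normally this is the real problemzones file)
cat > problemzones <<'EOF'
2013/09/22-14:43:25, [ZONE-1010], 52, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE2) specification.
2013/09/22-14:43:25, [ZONE-1010], 51, FID 128, WARNING, SWITCH01, Duplicate entries in zone (ZONE1) specification.
EOF

# The zone name is the text between the parentheses on each WARNING line;
# split on ( and ) and print each name once
awk -F'[()]' '/Duplicate entries in zone/ {print $2}' problemzones | sort -u
```

Each printed name can then be handed to the expect script that connects to the switch and removes the duplicate entry.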

Automate SSH Login to CISCO Device and Capture show tech support to log file with Expect

Step 1: Install expect (CentOS 6.7 32-bit)
yum install expect

Step 2: Verify expect installed correctly
expect -v

Step 3: Create Working Folder for Script
cd $HOME
mkdir CiscoAutomation
cd CiscoAutomation

Step 4: Create expect script with vi
vi TestCISCOLogin.exp

=> Script file contents
#!/usr/bin/expect
set timeout 10
set hostname [lindex $argv 0]

set username "username"
set password "password"
set enablepassword "password"

spawn ssh $username@$hostname

expect "Password:" {
  send "$password\n"

  expect ">" {
    send "en\n"
    expect "Password:"
    send "$enablepassword\n"
    expect "#"
    send "terminal length 0\r"
    expect "#"
    send "show tech-support\r"
    log_file /var/log/cisco-tech-support-$hostname
    expect "#"
    send "exit\n"
  }
  interact
}

Automate SSH Login to CISCO Device and Capture show running-config to log file with Expect

Step 1: Install expect (CentOS 6.7 32-bit)
yum install expect

Step 2: Verify expect installed correctly
expect -v

Step 3: Create Working Folder for Script
cd $HOME
mkdir CiscoAutomation
cd CiscoAutomation

Step 4: Create expect script with vi
vi TestCISCOLogin.exp

=> Script file contents
#!/usr/bin/expect
set timeout 10
set hostname [lindex $argv 0]

set username "username"
set password "password"
set enablepassword "password"

spawn ssh $username@$hostname

expect "Password:" {
  send "$password\n"

  expect ">" {
    send "en\n"
    expect "Password:"
    send "$enablepassword\n"
    expect "#"
    send "terminal length 0\r"
    expect "#"
    send "show running-config\r"
    log_file /var/log/cisco-running-config-$hostname
    expect "#"
    send "exit\n"
  }
  interact
}

Automate SSH Login to CISCO Device with Expect

Step 1: Install expect (CentOS 6.7 32-bit)
yum install expect

Step 2: Verify expect installed correctly
expect -v

Step 3: Create Working Folder for Script
cd $HOME
mkdir CiscoAutomation
cd CiscoAutomation

Step 4: Create expect script with vi
vi TestCISCOLogin.exp

=> Script file contents
#!/usr/bin/expect
set timeout 10
set hostname [lindex $argv 0]

set username "username"
set password "password"
set enablepassword "password"

spawn ssh $username@$hostname

expect "Password:" {
  send "$password\n"

  expect ">" {
    send "en\n"
    expect "Password:"
    send "$enablepassword\n"
  }
  interact
}

Automate Telnet Login to CISCO Device with Expect

Step 1: Install expect (CentOS 6.7 32-bit)
yum install expect

Step 2: Verify expect installed correctly
expect -v

Step 3: Create Working Folder for Script
cd $HOME
mkdir CiscoAutomation
cd CiscoAutomation

Step 4: Create expect script with vi
vi TestCISCOLogin.exp

=> Script file contents
#!/usr/bin/expect
set timeout 10
set hostname [lindex $argv 0]

set username "username"
set password "password"
set enablepassword "password"

spawn telnet $hostname

expect "Username:" {
  send "$username\n"
  expect "Password:"
  send "$password\n"

  expect ">" {
    send "en\n"
    expect "Password:"
    send "$enablepassword\n"
  }
  interact
}

Coming Soon

Coming Next

1. Automate Network Management and Easier Deployments with Ansible

2. Taking Asterisk to the next level with Infrastructure as Code

Using PowerCLI and OS Customisations to Rapidly Provision a Q/A Environment

In the example below I've presented a PowerCLI script that will provision 20 desktop VMs to be used within a Q/A testing process to support automated testing.

# We will name the VMs "QAVMWSUD-01", "QAVMWSUD-02".....

Step 1:  Source Control Your Configuration

a. Download Git for Windows
b. Create a new folder structure for your project e.g.
mkdir c:\devops\irm\qabuild\git
cd c:\devops\irm\qabuild\git
git init --bare # which creates a new git repo
c. Create a file to hold the list of static IPs you wish to assign
192.168.0.11
192.168.0.12
192.168.0.13
............

md c:\devops\irm\qabuild\working
cd c:\devops\irm\qabuild\working
git clone c:\devops\irm\qabuild\git

Step 2: Provision the VMs with PowerCLI

$strNameTemplate = "QAVMWSUD-{0:D2}" # QA / Virtual Machine / Windows / User Desktop / Two Digit 00...99

$objCluster = Get-Cluster LocalDC-Non-Production
$objTemplate = Get-Template QATMPLWIN8UD

$objVMList = @()

for ($i = 1; $i -le 20; $i++)
{
  $strVMName = $strNameTemplate -f $i
  $objVMList += New-VM -Name $strVMName -ResourcePool…
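The `-f` format operator with `{0:D2}` is what produces the zero-padded two-digit suffix. The same naming sequence can be sanity-checked outside PowerCLI; a quick sketch in shell, using the prefix assumed from the template above:

```shell
# Generate the VM names QAVMWSUD-01 .. QAVMWSUD-20, zero-padded to two digits
# (shell printf %02d mirrors PowerShell's {0:D2} format specifier)
for i in $(seq 1 20); do
  printf 'QAVMWSUD-%02d\n' "$i"
done
```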

Arduino – A Cry from the D6 in DCU

What is Arduino? It's an open-source project that focuses on building micro-controller boards. The idea behind the project is to make physical computing more accessible. The best way to think of this is as a very small computing device that lets you create small programs that interact with the physical environment around them, for example environment sensors. These micro-controllers are made up of a small CPU, some RAM and flash memory.
Typically you will create the program on another computer (PC) running Windows, Linux or Mac OS.
Arduino has its own programming language, which is a dialect of C++.
Typically the host computer on which you compile the code will interface with the Arduino via a USB cable.
To find out more check out http://www.arduino.cc
Watch a video on building an Arduino which is connected to a Nokia display



Continuous Deployment the Octopus Way

Intelligent, analytics-driven API performance monitoring

With comprehensive API monitoring from APImetrics, you gain the ability to track every aspect of your API performance and get immediate alerts when something goes wrong, giving you the chance to fix problems before your users ever encounter them.

Visit http://apimetrics.io/

Setup EMC ScaleIO Test Lab on Windows Host with VirtualBox + CentOS

The following is a step by step on how to create a 3 node Vagrant ScaleIO Cluster using

Microsoft Windows as the host operating system
Oracle VirtualBox as the hypervisor
CentOS 6.5 i386 as the guest operating system

Step 1: Create a working folder structure for the test LAB on your hard drive e.g.
      C:\emc\scaleio\vagrant-scaleio

Step 2: Download install Oracle VirtualBox.

Step 3: Download and install Vagrant.

Step 4: Jonas Rosland has created a 3 node startup LAB configuration which you can use to fast track deployment. The quickest and easiest way to download this configuration is to git clone it to the folder created above, so open a command shell:
      cd C:\emc\scaleio\vagrant-scaleio
      git clone https://github.com/virtualswede/vagrant-scaleio.git

Step 5: Next you will need to download the relevant RPM packages for CentOS to support a ScaleIO installation from EMC.  You will need an EMC account for this so register at https://support.emc.com/   Once registered download the fol…

Simplify, Automate and Speed-up REST API Testing

vREST is an automated REST API Testing Tool.

The vREST extension records filtered HTTP requests and their responses in the vREST application. This extension works as part of the hosted application vREST (http://vrest.io - An Online tool for Automated Testing, Mocking, Automated Recording and Specification of REST / RESTful / HTTP APIs).

It is a very simple extension which records the HTTP requests of the web application under test, along with their parameters, headers and responses, and automatically stores them as test cases in vREST. It can be configured to filter HTTP requests by content type and URL pattern while recording.

Features include:
A simple and intuitive tool to quickly validate your REST APIs.
Deliver zero-defect web applications with very little effort in API testing.
Works in hosted mode (http://vrest.io).
No skilled resources required to validate your web application.
Quickly generate documentation for your API specifications.
De-couple your frontend develo…

Easier Data Analytics with Keen.io

Keen IO is a platform that takes the headache out of capturing, storing and making sense of large amounts of event-based data.

NoCloudAllowed on Kali Linux


Data Visualization & Integration with TEIID

Teiid is a data integration and virtualization engine. Teiid provides seamless integration with many different kinds of sources, such as relational databases, flat files, web services, packaged applications, etc.



Visit TEIID at http://teiid.jboss.org/

Open Source PaaS: Deploy and Manage Applications on your own Servers

Deis is an open source PaaS based on Docker that runs on public cloud, private cloud and bare metal.
Deis allows software teams to deploy and scale almost any application on their own PaaS using a workflow inspired by Heroku. Deis combines Docker's Linux container engine with infrastructure automation by Chef to create an application platform designed for developers and operations engineers.
http://deis.io/

SALT: Infrastructure automation and management system

Salt, a new approach to infrastructure management, is easy enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with those servers in seconds.

Salt delivers a dynamic communication bus for infrastructures that can be used for orchestration, remote execution, configuration management and much more.

http://saltstack.com/community/

Packer – Create VM Images Quicker and Easier

Packer is a tool for creating identical images for multiple platforms from a single source configuration.


Packer supports multiple providers including Rackspace, AWS, DigitalOcean, VMware, VirtualBox and others. Let's look at how we can use Packer to make images on Rackspace.

The first step is to download Packer on to your Windows or Linux system:

https://www.packer.io/

Packer uses JSON templates to define an image. Packer takes this JSON and runs the builds defined, producing a machine image.

To provision an Ubuntu 64-bit VM on VirtualBox, the JSON file will look similar to:

{
  "variables": {
    "ssh_name": "trevor",
    "ssh_pass": "trevor123",
    "hostname": "packertest"
  },
  "builders": [{
    "type": "virtualbox-iso",
    "guest_os_type": "Ubuntu_64",
    "vboxmanage": [
      ["modifyvm", "{{.Name}}", "--vram", "32"]
    ],
    "disk_size": 10000,
    "iso_url": "./ubuntu-14.04.1-server-amd64.iso",
    "iso_checksum": "2cbe868812a871242cdcdd8f2fd6feb9",
    "iso_checksum_type": "none",
    "http_directory": "ubuntu_64",
    "http_port_min" …
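Since Packer templates are plain JSON, a quick syntax check catches quoting mistakes (smart quotes pasted from a word processor are a classic one) before a long build starts. A minimal sketch; the stub below stands in for the full template, saved under the hypothetical name `ubuntu64.json`:

```shell
# Stand-in for the full template (hypothetical file name: ubuntu64.json)
cat > ubuntu64.json <<'EOF'
{
  "variables": { "ssh_name": "trevor", "ssh_pass": "trevor123", "hostname": "packertest" },
  "builders": [ { "type": "virtualbox-iso", "guest_os_type": "Ubuntu_64" } ]
}
EOF

# Any JSON-aware tool can check the syntax; python3 is usually available
python3 -m json.tool ubuntu64.json > /dev/null && echo "template JSON is valid"
```

Once Packer itself is installed, `packer validate ubuntu64.json` performs the same syntax check plus Packer-specific validation of the builder keys.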

BRIDGE.NET - Run C# Code on Any Device

BRIDGE.NET is a platform which integrates with Visual Studio.NET and allows you to write your code in C# and then compile it to JavaScript.

This enables you to run that C# code on any device which supports JavaScript.

Check out BRIDGE.NET at
http://bridge.net/

C# Code
using Bridge;
using Bridge.Html5;

namespace DemoApp
{
    public class App
    {
        [Ready]
        public static void Main()
        {
            Window.Alert("Hello World");
        }
    }
}
Javascript Code Generated

Bridge.Class.define('DemoApp.App', {
    statics: {
        $config: {
            init: function () {
                Bridge.ready(this.main);
            }
        },
        main: function () {
            window.alert("Hello World");
        }
    }
});

C/C++ DLL Wrapper for WSC32.DLL

//------------------------------------------
// CustomerSerial.cpp
// Trevor O Connell
//
// C/C++ DLL Wrapper for WSC32.DLL
//

#include "stdafx.h"
#include "CustomerSerial.h"
#include "wsc.h"
#include <stdio.h>

#ifdef _DEBUG
#define new DEBUG_NEW
#endif


BEGIN_MESSAGE_MAP(CCustomerSerialApp, CWinApp)
END_MESSAGE_MAP()


CCustomerSerialApp::CCustomerSerialApp()
{
}

CCustomerSerialApp theApp;

BOOL CCustomerSerialApp::InitInstance()
{
CWinApp::InitInstance();
return TRUE;
}


//################################
//
// Custom Wrapper Routines by TOC
//
//################################



// Function called the first time the DLL is loaded.
void initialize()
{
}

// Called when front end stopped.
void finalize()
{
}


// Pass keycode to WSC DLL
int _SioKeyCode(const struct frontEndInterface &fx) {
  // initialize the status
  short status = -1;

  // check if the in and out parameters are correct
  if (fx.getParamCount() == 1 && fx.getReturnCount() == 1) {

CRATE.IO : The Distributed Database Cluster for Developers

CRATE.IO enables developers to quickly set up a distributed database cluster, either on their own hardware or in a public cloud e.g. AWS.

One of the key advantages of CRATE.IO is its ability to scale.

The guiding principle for CRATE.IO is simplicity. Not only is it easy to set up, but once everything is up and running, developers can use standard SQL queries to work with their data.

Visit Crate at http://crate.io

Interesting Howto at "Using Crate Data with Rails"
http://vedanova.com/tech/open%20source/2014/06/24/using-crate-with-rails.html

To install Crate (CentOS):

bash -c "$(curl -L try.crate.io)"

You can connect to your twitter account and download tweets from your account and import in to a test table.
From here you can select the console from the web interface and run SQL standard queries against that table.

From the console you can run queries against the imported tweets table, view the status of tables and of the cluster, and view console system messages.

IoT - Build your connected product

Spark offers a suite of hardware and software tools to help you prototype, scale, and manage your Internet of Things products.
https://www.spark.io/

You can purchase your Spark Dev Kit here
https://store.spark.io/

Highlights

The brain: Spark Core, Photon, or Electron
Components: Shields, breadboards, jumper cables, and breakout boards
Spark's Bill of Materials: $19 and up
The back-end: Spark's hosted cloud platform (included with hardware)
Timeframe: An afternoon to a week

Spark API

Apache Spark is a fast and general-purpose cluster computing system. It provides high-level APIs in Java, Scala and Python, and an optimized engine that supports general execution graphs. It also supports a rich set of higher-level tools including Spark SQL for SQL and structured data processing, MLlib for machine learning, GraphX for graph processing, and Spark Streaming. Check out more on Apache Spark at http://spark.apache.org

Deploy Code from GITHUB to AWS via DPLOY.IO

Recently, while deploying a WordPress web site to Amazon, we needed to work with a third party who used DPLOY.IO for continuous deployment.

To enable this we did the following

1. Create an account on dploy.io
2. Connect to a GitHub repo
3. Create an environment which used a SSH connection to our target instance on Amazon
4. Use SSH public key authentication
5. Download the SSH public key from DPLOY.IO
6. Create a user account on our Amazon Instance which we will associate with the downloaded public key
7. Copy the downloaded public key into the .ssh/authorized_keys file for that account
8. Save the environment on DPLOY.IO and this will test the connection

Check /var/log/secure to ensure a successful connection has been made.
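Steps 6 and 7 come down to file placement and permissions, which sshd is strict about. A minimal sketch, using a scratch directory in place of the real account's home and a placeholder key string (the real key is the one downloaded from DPLOY.IO):

```shell
# Use a scratch directory to stand in for the deploy account's home
HOME_DIR=$(mktemp -d)

# Step 7: place the downloaded public key into .ssh/authorized_keys
# sshd requires .ssh to be 700 and authorized_keys to be 600
mkdir -p "$HOME_DIR/.ssh"
chmod 700 "$HOME_DIR/.ssh"
echo "ssh-rsa AAAAB3placeholder dploy-deploy-key" >> "$HOME_DIR/.ssh/authorized_keys"
chmod 600 "$HOME_DIR/.ssh/authorized_keys"
```

Wrong permissions on either the directory or the file are a common reason the connection test in step 8 fails, and /var/log/secure will usually say so.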

