Saturday, 5 April 2014

How to Setup Bitnami Joomla on Amazon ec2

Joomla is one of the most popular content management systems. You can install Joomla on EC2 manually, or you can use Bitnami AMIs for a one-click install. In this tutorial I will explain how to set up Joomla on EC2 using a Bitnami AMI.

Also read: How to install Joomla on Amazon EC2 manually.

Setting up Joomla Using Bitnami:

1. Go to the Amazon Marketplace.

2. Search for "joomla bitnami".

3. Select the Bitnami Joomla AMI.


4. Click continue

5. Select the "Launch with EC2 Console" option, scroll down a bit and choose the Joomla version and OS of your choice. Then click "Launch with EC2 Console" next to the region of your choice.

6. It will take you to the EC2 management console. Launch the instance as you normally would: select the instance type, the security groups (with ports 80 and 8080 open) and the key pair, then launch the instance.

7. Once the instance is up and running, copy the public IP of your instance, paste it into a new browser window and hit enter. It will take you to the Bitnami start page.


8. Click "Access my application" and you will be able to access your application. The default username is "user" and the password is "bitnami". You can access the application and the application dashboard from the following URLs.
http://<public ip or public dns>/joomla
https://<public ip or public dns>/administrator
9. By default you won't be able to access the phpMyAdmin page, for security reasons. So you have to make some changes to be able to reach it using your instance's public IP or public DNS.

10. Connect to the instance using putty.

Read: how to connect an ec2 instance using putty

11. Open the httpd-app.conf file using the following command.
vi /home/bitnami/apps/phpmyadmin/conf/httpd-app.conf
12. Once you open the file, uncomment the commented lines and change "Allow from 127.0.0.1" to "Allow from all". The file should look like the following.
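For reference, the edited block of httpd-app.conf should look roughly like this (a sketch based on the stock Bitnami phpMyAdmin configuration; the directory path and directive casing may differ slightly between Bitnami versions):

```apache
<Directory "/opt/bitnami/apps/phpmyadmin/htdocs">
    AllowOverride None
    Order allow,deny
    # Changed from "Allow from 127.0.0.1" so the page is reachable remotely
    Allow from all
</Directory>
```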


13. Now restart the apache server using the following command.
sudo /opt/bitnami/ctlscript.sh restart apache
14. Now you will be able to access the phpMyAdmin page using the public DNS. The default username is root and the password is bitnami. You can access phpMyAdmin from the following URL.
http://<public ip or dns>/phpmyadmin
Accessing phpMyAdmin through tunneling:
You can access phpMyAdmin from your localhost by creating an SSH tunnel between your system and the web server. Issue the following command in the terminal and hit enter to create a tunnel.
ssh -N -L 8888:127.0.0.1:80 -i /home/bitnami.pem bitnami@ec2-64-146-250-177.us-west-2.compute.amazonaws.com
In the above command, /home/bitnami.pem is the path to the key pair you have for the bitnami instance. Replace ec2-64-146-250-177.us-west-2.compute.amazonaws.com with the public dns of your instance.

After running the above command successfully, you can access phpMyAdmin through your localhost address using the following URL.
http://localhost:8888/phpmyadmin
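The long tunnel command is easy to mistype, so you can wrap it in a tiny helper that prints the exact command for your key and host (a sketch; build_tunnel_cmd is a hypothetical name, and the key path and host are placeholders you replace with your own):

```shell
#!/bin/sh
# Hypothetical helper: prints the ssh port-forwarding command for a
# Bitnami instance, given the .pem key path and the instance public DNS.
# The local port defaults to 8888, matching the tutorial.
build_tunnel_cmd() {
  key="$1"; host="$2"; local_port="${3:-8888}"
  printf 'ssh -N -L %s:127.0.0.1:80 -i %s bitnami@%s\n' "$local_port" "$key" "$host"
}

# Print the command for the tutorial's example key and instance,
# then run the printed command yourself to open the tunnel.
build_tunnel_cmd /home/bitnami.pem ec2-64-146-250-177.us-west-2.compute.amazonaws.com
```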

Share this post and leave a comment for queries.


Saturday, 29 March 2014

How to install and setup Plesk on Amazon EC2

 Plesk:

Plesk is a control panel solution for system administrators and webmasters. Plesk is a product of Parallels.

Use case:

Plesk is a good dashboard solution if you want to host multiple websites for multiple users on an EC2 machine.

Setting up plesk on ec2:

You can set up Plesk on EC2 using two methods.

1. Launch the licence-attached Plesk AMI from the Amazon Marketplace. You will have to pay for Amazon resources as well as the Plesk licence on a per-hour basis; the Plesk licence comes to around $0.07 per hour. This offers a 14-day trial package, and you will be charged once the subscription is over. The trial covers only the Plesk licence; you will still be charged for the AWS resources you use.

Note: You cannot launch Plesk on a micro instance. The minimum supported instance type is m1.small.

2. Launch an instance with Plesk and use your own licence if you have purchased one.

I would recommend the first method. In this tutorial I will walk you through the Plesk setup using the trial AMI.

Instance setup:

1. Go to the AWS management console, select the AWS region and navigate to the EC2 section.

2. Click launch instance, select the Marketplace option and search for Plesk using the search box.

3. Select the AMI, select the instance type and launch the instance like a normal instance. Open ports 8443, 80 and 443 in the security groups; port 8443 is used by the Plesk dashboard.

Read: How to launch an instance on amazon ec2

4. Once launched, log in to the instance using putty.

Read: How to connect an instance using putty

5. Update the server packages using the following command.
sudo yum update
6. Run the following command to set the server IP for Plesk. I recommend using an elastic IP for your instance, because the public IP changes every time you restart the server, and you would have to run the following command after every restart to access the Plesk dashboard.
sudo /usr/local/psa/bin/amazon_setup_ip <your server ip> 
7. You can access the Plesk dashboard at https://yourip:8443. The default username is admin, and you can get the password from the server by running the following command in putty.
sudo /usr/local/psa/bin/admin --show-password
8. Get the password and log in using the username and password.

Once you have logged in to the Plesk dashboard, you can configure Plesk as you like. Leave a comment for any queries.


Thursday, 13 March 2014

How To Install Magento On Amazon EC2

Magento is a popular open source content management system for ecommerce web applications. There is also an enterprise edition of Magento. The open source edition can be used for small-scale ecommerce websites, and you can modify the application based on your needs. The enterprise edition, on the other hand, is meant for high-end ecommerce sites and offers more customization options compared to the open source version.

You can read a comparison between the Joomla CMS and Magento here.

In this tutorial I will explain how you can set up the open source Magento framework on an Amazon EC2 machine.

EC2 instance configuration:
1. Launch an Ubuntu instance using the management console. When launching the instance, make sure you open port 80 in the security groups.

Read: How to launch an Ubuntu instance on amazon ec2

2. Connect the instance using putty.

Read: How to connect ec2 instance using putty.

3. Login as root user.
sudo su
4. Update the server
apt-get update
Install the LAMP stack:
The Magento backend needs Apache, PHP and a MySQL database. You can install all three using one command.

1. Install and configure Lamp stack.
apt-get install  -y lamp-server^
2. Create a root password for mysql and confirm it.

3. You can check whether the apache and mysql services are running using the following commands.
service apache2 status
service mysql status
4. Magento needs a database on the MySQL server. You can create a database on the MySQL server using the command line or phpMyAdmin. I prefer phpMyAdmin, since you can manage your MySQL server from the browser using a GUI.

5. Install phpMyAdmin on the Ubuntu instance.
apt-get install phpmyadmin
You will be prompted to select the webserver. Select apache2 using the space bar and hit enter. Then you will be prompted to enter the phpMyAdmin root password. Give it a strong password.

phpMyAdmin has to be integrated with mysql-server, so when prompted select dbconfig-common and enter the MySQL root password you created during the LAMP stack installation.

6. Once installed, you can access the phpMyAdmin dashboard using the public IP, elastic IP or the public DNS of your instance followed by /phpmyadmin.
54.154.35.67/phpmyadmin
7. Log in to phpMyAdmin using the credentials you created. The default username is root and the password is the one you created during the phpMyAdmin setup.

8. Create a database named magentodb for the Magento application using phpMyAdmin.

9. Click the database option in the top navigation panel, enter magentodb for the database name and hit create.
Download Magento
1. Download magento to the /var/www folder.
wget http://www.magentocommerce.com/downloads/assets/1.8.1.0/magento-1.8.1.0.tar.gz
2. Untar the file.
tar -xvzf mag*
3. Change the permissions on the following folders to give Magento write permission on them.
chmod -R o+w magento/app/etc/
chmod -R o+w magento/var/
chmod -R o+w magento/media/
4. Add the mcrypt extension to the php.ini file located at /etc/php5/apache2/php.ini.
vi  /etc/php5/apache2/php.ini
extension=mcrypt.so
5. Install php5 curl.
apt-get install php5-curl
6. Add the curl extension to the php.ini file.
vi /etc/php5/apache2/php.ini
extension=curl.so
Installing the Magento stack:
1. Go to http://<public ip>/magento in your browser. The Magento installation wizard will appear.

e.g. http://54.23.154.34/magento
2. Tick the terms and conditions and hit continue.

3. Select the timezone, locale and currency and hit continue.

4. Enter the MySQL credentials and the database name "magentodb" in the required fields. Enter the username and password for MySQL, check all the details and click continue.

Note: You can use Amazon RDS for the backend database. If you are using RDS MySQL, provide the database endpoint in the host field, and give the username and password for the RDS database server.


5. For the admin account, enter your personal information, login information and encryption key, and hit continue.

6. That's it! The installation is done. You can now access the frontend and the backend using the options given on the page. The survey is optional.


7. Frontend and backend access URLs:
http://54.156.89.24/magento/index.php/admin 
http://54.186.99.211/magento/index.php/ 




Monday, 3 March 2014

How To Install and Configure Nginx on Amazon ec2 RHEL and Ubuntu Instances

Nginx:
Nginx is a webserver like apache. Performance-wise, nginx is considered to be better than apache: request processing in nginx is event-based, as opposed to apache's model of spawning a new thread per connection.

In this tutorial I will explain how to install and configure Nginx on EC2 RHEL and Ubuntu instances.

RHEL:
The process for installing Nginx on RHEL, CentOS and Amazon Linux is the same.

1. Launch a RHEL instance using the management console. While launching the instance, configure the security group to allow traffic on HTTP port 80.

2. Connect to the instance using putty.

Note: If you already have the apache server installed and running, you have to stop the apache service. If you don't, it will create conflicts.

1. On RHEL you cannot download and install nginx directly. You have to set up the EPEL (Extra Packages for Enterprise Linux) repository to install nginx.

2. Download the EPEL repo rpm for RHEL 6 64-bit.
wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
3. Install the rpm using the following command.
rpm -ivh epel-release-6-8.noarch.rpm
4. Check that the repository is enabled.
yum repolist
5. Install nginx using the following command.
yum install nginx
6. Start the nginx server using either of the following commands.
service nginx start
/etc/init.d/nginx start
7. By default RHEL allows traffic only on port 22. Even if you have port 80 open in the security groups, you won't be able to access the web server. So you have to accept port 80 connections in the server's iptables.

8. Open iptables using the following command
vi   /etc/sysconfig/iptables
9. Add the following line to the file
-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
10. Save the file and restart iptables.
service iptables restart
11. Open the public IP or the DNS in your browser. You will see an nginx welcome message.


Setting up virtual hosts (server blocks):
1. Create the website folder and a public folder for static assets inside /var/www. Here I am going to use the name comtechies.com; the folder name can be any user-defined name.
mkdir -p /var/www/comtechies.com/public
2. Create an index.html inside the public folder and put any basic html page.
<html>
<body>
<h1> this is a test for RHEL virtual host setup</h1>
</body>
</html>
3. Change the owner of your website files so that you have read-write permissions on them.
chown ec2-user:ec2-user /var/www/comtechies.com/public
4. Allow read access to /var/www for everyone to access the website publicly.
chmod 755 /var/www
5. To add a virtual host, edit the virtual.conf file using the following command
vi /etc/nginx/conf.d/virtual.conf
6. Make the configuration changes like the following and save the file.
server {
    listen       80;
  # listen       somename:8080;
    server_name  ec2-54-176-1-191.us-west-1.compute.amazonaws.com;
    location / {
        root  /var/www/comtechies.com/public;
        index  index.html index.htm;
    }
}
Above, for the server name I have put the public DNS of my Amazon machine. Since I don't have a custom domain, I am testing the virtual host setup using the public DNS.

7. Restart the nginx server
service nginx restart
8. If you use the public DNS, nginx will throw the following error when you restart it.
nginx: [emerg] could not build the server_names_hash, you should increase server_names_hash_bucket_size: 64
9. Since the DNS name is quite long, you have to increase the server name hash bucket size to 128 in the nginx.conf file.
10. Open the nginx.conf file
vi /etc/nginx/nginx.conf
11. Add the following to the http part of the file.
server_names_hash_bucket_size 128;
12. If you restart the nginx server now, you won't get the bucket size error.

13. Check the virtual host setup by visiting the public DNS or the IP. You will see the demo html page you put in the /var/www/comtechies.com/public folder.

14. If you want to add more websites, you just have to add one more virtual host configuration block to the virtual.conf file, like we did above.
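If you are setting up several sites, the directory scaffolding from steps 1-4 can be scripted (a sketch; scaffold_site is a hypothetical helper name, and the placeholder page mirrors the demo html above):

```shell
#!/bin/sh
# Hypothetical helper: creates the <base>/<domain>/public layout with a
# placeholder index.html, then opens read access on the base folder
# as in step 4 of the virtual host setup.
scaffold_site() {
  base="$1"; domain="$2"
  mkdir -p "$base/$domain/public"
  cat > "$base/$domain/public/index.html" <<EOF
<html>
<body>
<h1> this is a test for $domain virtual host setup</h1>
</body>
</html>
EOF
  chmod 755 "$base"
}

# Example: scaffold_site /var/www comtechies.com
```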

Load balancing using Nginx:
You can use nginx as a load balancer to distribute load across a fleet of servers. Nginx proxies the incoming requests and sends them to the backend servers. To configure nginx as a load balancer, you have to add two blocks of code to the nginx.conf file.

1. Open nginx.conf file
vi  /etc/nginx/nginx.conf
2. Add the upstream group under the http section. The upstream group is the group of servers that sits behind the load balancer. You can give the group any user-defined name; here I am going to use the name "web_fleet".

Note: this configuration should be present inside the http section of nginx.conf file.
upstream web_fleet {
    server 54.136.14.10:80;
    server 64.156.14.11:80;
    server 94.176.14.12:80;
}
3. Now you have to set up the vhost configuration to receive traffic for a particular domain name and route it to the upstream servers. Add the following lines after the upstream block.
server {
    listen 80;
    server_name ec2-54-186-14-165.us-west-2.compute.amazonaws.com;
    location / {
        proxy_pass http://web_fleet;
    }
}
Here the server name I have given is the public DNS of the nginx EC2 server. This is where you would provide your custom domain; for testing purposes you can use the public DNS of your EC2 server.

4. Save the nginx.conf file and restart the nginx server.
service nginx restart
5. Now, if you access your nginx server using the public DNS, the request will be routed to the backend fleet defined in the upstream block. You can test this by installing the apache server on other machines and putting their IP addresses inside the upstream block.

There are many other parameters and settings associated with the load balancing configuration. You can check the official nginx documentation for more details.
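For example, nginx supports per-server parameters such as weight and backup inside the upstream block (a sketch with the same placeholder IPs as above; see the nginx upstream module documentation for the full list):

```nginx
upstream web_fleet {
    server 54.136.14.10:80 weight=3;   # receives roughly 3x the requests
    server 64.156.14.11:80;            # default weight is 1
    server 94.176.14.12:80 backup;     # used only when the others are down
}
```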

Ubuntu:

1. Launch an Ubuntu instance with port 80 open in the security group.

2. Connect the instance using putty

3. Issue the following command to become the root user.
sudo su 
4. Install nginx from the apt repository.
apt-get install nginx
5. After installation start the Nginx service using any one of the following commands.
/etc/init.d/nginx start   or
service nginx start 
6. Copy and paste the public IP or the DNS name of your Ubuntu server into the browser and hit enter. You will see a welcome message from the nginx server.

Setting up server blocks (virtual hosts):
If you want to host more than one website on your server, you should set up a virtual host for each website.

1. Create a directory for your website inside /var/www folder.
mkdir -p /var/www/yoursite.com/public
The public folder will contain all the static pages of your website. yoursite.com should be the actual custom domain you point at the server IP.
2. Change the owner to ubuntu using the following command.
chown -R ubuntu:ubuntu /var/www/yoursite.com/public
3. Change the read permissions of the www folder so that everyone can access the website.
chmod -R 755 /var/www
4. Put a sample html page inside the public folder.
nano /var/www/yoursite.com/public/index.html
Enter any sample html code
<html>
<h1> This is the virtual host for yoursite.com</h1>
</html>
5. Copy the default virtual host file to yoursite.com.
cp /etc/nginx/sites-available/default /etc/nginx/sites-available/yoursite.com
6. Open the yoursite.com file under the sites-available folder and replace its contents with the following.
server {
        listen   80; ## listen for ipv4; this line is default and implied
        
        root /var/www/yoursite.com/public;
        index index.html index.htm;

        # Make site accessible from http://localhost/
        server_name yoursite.com;
}
7. Create a link between the sites-available and sites-enabled copies of the yoursite.com file.
ln -s /etc/nginx/sites-available/yoursite.com /etc/nginx/sites-enabled/yoursite.com
8. Restart the nginx server.
service nginx restart

Load balancing configuration on Ubuntu is the same as on RHEL.

Share the article and leave a comment for queries.


Saturday, 1 March 2014

Saltstack : Creating and Deploying Salt Formulas On Minions

Test whether the minion is connected to the master using the ping command. It returns True if the master and the minion are connected. Here I am using my minion "jarvis".
salt jarvis test.ping
The application configuration for the minion is written in a state file with the .sls (salt state) extension. The state files should be present inside the /srv/salt folder. Create the salt folder inside /srv.
mkdir -p /srv/salt
In this tutorial I am going to create and deploy a simple state file for installing and starting the apache webserver.
You can put the state file directly inside the /srv/salt folder, but for good file organization, create a folder named webserver inside the salt folder.
mkdir  -p /srv/salt/webserver
Create an apache.sls file inside the webserver folder.
vi apache.sls
Copy the following inside the apache.sls file.
Apache-webserver:
  pkg.installed:
    {% if grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% endif %}
  service.running:
    {% if grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% endif %}
Apache-webserver: This is a user-defined name. It can be any name; it serves as an ID.
pkg.installed: This function ensures that the package defined under it is installed.
{% if grains['os'] == 'Ubuntu' %}
   - name: apache2
  {% endif %}
The apache package name differs between OS families, so we use Jinja templating to provide the appropriate package name using salt grains. Grains is a module that collects static information about a minion. In the above code it checks whether the OS is Ubuntu; if yes, the apache2 package is installed. For a Red Hat system, you have to check grains['os'] == 'RedHat', and the package name will be httpd.
service.running:
  {% if grains['os'] == 'Ubuntu' %}
  - name: apache2
  {% endif %}
service.running will ensure that the installed service is in a running state. The service name is provided to the state using the same grains OS check.
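Putting the two branches together, a cross-distro apache.sls might look like this (a sketch; the 'RedHat' grain value and the httpd package name follow the standard Salt and Red Hat conventions):

```yaml
Apache-webserver:
  pkg.installed:
    {% if grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% else %}
    - name: httpd      # Red Hat family package name
    {% endif %}
  service.running:
    {% if grains['os'] == 'Ubuntu' %}
    - name: apache2
    {% else %}
    - name: httpd
    {% endif %}
```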

Now we have a state file which will install and start the apache webserver. Let's say you have to deploy this state to one of the minions in your infrastructure. This is where the top file comes into play. The top.sls file should be present inside the /srv/salt folder.

In the top.sls file we define the minions and states. The top file decides which state should run on a particular minion or on all minions. You can also define various environments inside the top file, so that specific states run only in the specified environments.

Create a top.sls file inside the /srv/salt folder. The top.sls for the apache state we created is shown below.
base:
  'jarvis':
    - webserver.apache
base: is the default environment. Normally in base we define the states that have to be applied to all the minions under the master.

'jarvis' is the minion name. If you put * instead of jarvis, it will include all the minions under the master.

webserver.apache is the path to the salt state file: webserver is the folder inside /srv/salt, and apache identifies the apache.sls file inside the webserver folder.

Deploying the state on Minions:
You can deploy the state on a minion from both the master and the minion. In other configuration management tools like chef and puppet, you need to configure extra components like mcollective to push a configuration to a node. In salt, the push feature comes by default.

From master:
The following command deploys the states described in the top file on all minions in parallel.
salt '*' state.highstate

If you use a minion name instead of *, the state will be deployed only on that particular minion.
salt jarvis state.highstate

The following command installs ntp on all the minions without a state file. You can also use this form to ensure a particular package is installed.
salt '*' state.single pkg.installed name=ntp
From Minion:
Execute the following command on the minion to apply the salt states mentioned in the top file.
salt-call state.highstate

Saturday, 15 February 2014

How To Prepare For AWS certification - AWS Solutions Architect Associate Level


Exam Overview:
1. 55 multiple choice questions
2. 80 minutes
3. 65% is the passing score

I passed the AWS Certified Solutions Architect - Associate level exam with 87%, completing it in 40 minutes. The questions were quite easy, but the options were tricky. So I will explain the steps I followed to get this certification.

Hands On Experience:
Hands-on experience is a must for this certification. Most of the questions asked in the exam are scenario-based, and it will be really tough to pick the right answer unless you have hands-on experience with the core services. I signed up for a Udemy course for AWS certification (2013), and it helped me a lot. I got good hands-on sessions and a good conceptual understanding of the AWS services.

If you are a newbie and don't have any hands-on experience, I would suggest you sign up for the Udemy AWS certification 2013 or 2014 course. Even though I had hands-on experience with AWS services, this course gave me in-depth knowledge about the core services.

Note: Take notes while watching the video course.

Course links: AWS Certified Solutions Architect (2013)
Amazon Web Services Certified Solutions Architect - AL (2014)

Whitepapers:
You should read all the whitepapers recommended by AWS. It is really hard to read the whole AWS documentation, while the whitepapers give you an overall conceptual idea about the services. In fact, there were a few questions from the whitepapers. Read about the recommended whitepapers and sample questions here.

Re-invent Videos:
Watch the Amazon re:Invent videos for the core services like VPC, RDS, EC2 and S3 on YouTube. You will gain good knowledge about the AWS services if you watch those videos.

Link: Reinvent VPC video

Reading FAQ's:
Every service has an FAQ section in its documentation. Read those FAQs. It won't take much time, and they have answers for lots of confusing questions. Reading the FAQs will let you answer around 4 to 5 questions. You can skip the billing sections.

Documentation:
AWS has really good documentation for all their services. You can refer to the documentation to get started with implementations.

Use cases:
Try to learn the use cases of every service, like SWF, SQS, SNS, SES etc.

Architectures:
Go through all the reference architectures provided by AWS on this page. Understand clearly under what scenarios particular services like EC2, SQS and ElastiCache are used.

You might have read blog posts saying that VPC is where you get most of the questions. That is not true; questions come from almost all core services. In VPC you should learn about NAT, public and private subnets, NACLs, security groups and route tables.

Important services to cover:
EC2, VPC, RDS, S3, Route53, CloudFront, CloudWatch, ELB, Auto Scaling, SWF, SQS, DynamoDB.

Also read: AWS tutorials

Kindly share this post and leave a comment for queries.