References:-

· http://en.wikipedia.org

· http://www.howstuffworks.com

· http://www.amazon.com

· http://www.novatium.com

· http://aws.amazon.com

· http://www.wisegeek.com

· http://www.utilitycomputing.com

· http://www.webopedia.com

· http://www.thewhir.com

Conclusion:-

In the near future, the utility computing paradigm will become a more practical, real-life solution. The main idea of this paradigm is that infrastructure will become more centralized, while devices and user applications become more decentralized. We foresee the appearance of a few information utility companies that supply IT infrastructure, and the creation of billions of highly focused information appliances for end users.

Developing utility computing solutions is already a very big business for IT corporations. By using utility computing solutions, customers often get higher and more flexible performance for less money. Less money is wasted because customers pay only for the processing power, network bandwidth, software applications and storage they actually use; they do not pay for the underlying devices, which is a significant saving. Because of this improvement, utility computing solutions will become more and more popular. We expect adoption to begin, as it already has, with big companies that need large amounts of processing power, network bandwidth, software applications and storage.

Advantages of Utility Computing:-

1. The client doesn't have to buy all the hardware, software and licenses needed to do business. Instead, the client relies on another party to provide these services. The burden of maintaining and administering the system falls to the utility computing company, allowing the client to concentrate on other tasks.

2. Utility computing gives companies the option to subscribe to a single service and use the same suite of software throughout the entire client organization.

3. Another advantage is compatibility. In a large company with many departments, each department might depend on a different software suite, and the files used by employees in one part of the company might be incompatible with the software used in another part. Because the whole organization runs on the same subscribed suite, these incompatibilities disappear.

Disadvantages of Utility Computing:-

1. A potential disadvantage is reliability. If a utility computing company is in financial trouble or has frequent equipment problems, clients could be cut off from the services for which they're paying.

2. Utility computing systems can also be attractive targets for hackers. A hacker might want to access services without paying for them, or snoop around and investigate client files. Much of the responsibility for keeping the system safe falls to the provider.


Steps involved in Amazon EC2:-

1. Sign up for an Amazon developer account

· Sign up at www.amazon.com/aws and obtain a private key file and an X.509 certificate.

2. Install the Firefox plug-in

· Download it from the Amazon.com developer tools section.

3. Launch an instance

· Amazon supports Linux (Fedora/Red Hat) servers.

· Each server has a static IP address.

· After the server is launched, a public DNS name is created for it.

4. SSH to the server

· Go to a Linux terminal and run: ssh root@<public DNS name>

· Type the password; you are now logged in to the Amazon server as the root user.

· From this point on, we pay for the processing and storage we use.

i. Example:-

After logging in to the utility server:

[root@domainname#] firefox
(runs Firefox on the server; its display appears on our computer)

[root@domainname#] halt
(shuts down the server)
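The SSH step can also be scripted. The sketch below uses the paramiko Python library (an assumption; it is not part of the original text) to connect to the launched server and run a command, much like the example above; the hostname and key file path are hypothetical placeholders.

import paramiko   # third-party SSH library, assumed to be installed

# Connect to the launched server as root. EC2 normally authenticates with the
# downloaded private key; the hostname and key path below are placeholders.
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("ec2-xx-xx-xx-xx.compute-1.amazonaws.com",
               username="root",
               key_filename="/path/to/private-key.pem")

# Run a command on the utility server and read its output locally.
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()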

Working of Amazon EC2:-

Amazon EC2 presents a true virtual computing environment, allowing you to use web service interfaces to requisition machines for use, load them with your custom application environment, manage your network's access permissions, and run your image using as many or few systems as you desire.

To use Amazon EC2, you simply:

  • Create an Amazon Machine Image (AMI) containing your applications, libraries, data and associated configuration settings. Or use pre-configured, templated images to get up and running immediately.
  • Upload the AMI into Amazon S3. Amazon EC2 provides tools that make storing the AMI simple. Amazon S3 provides a safe, reliable and fast repository to store your images.
  • Use Amazon EC2 web service to configure security and network access.
  • Start, terminate, and monitor as many instances of your AMI as needed, using the web service APIs.
  • Pay only for the resources that you actually consume, like instance-hours or data transfer.
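These steps can also be driven entirely through code. The sketch below uses the boto3 Python SDK (a later AWS SDK, not mentioned in the original text) to start, monitor and terminate a single instance; the AMI ID, key pair and security group names are hypothetical placeholders.

import boto3   # AWS SDK for Python, assumed to be installed and configured

ec2 = boto3.client("ec2", region_name="us-east-1")

# Start an instance from an AMI (analogous to the Run Instances API mentioned above).
reservation = ec2.run_instances(
    ImageId="ami-12345678",        # placeholder AMI
    InstanceType="m1.small",       # the classic Small instance type
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",          # placeholder key pair
    SecurityGroups=["my-group"],   # placeholder security group
)
instance_id = reservation["Instances"][0]["InstanceId"]

# Monitor the instance until it is running.
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
print("running:", instance_id)

# Terminate it when finished, so that charges for it stop.
ec2.terminate_instances(InstanceIds=[instance_id])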

Detailed Description

Using Amazon EC2 to Run Instances


Amazon EC2 allows you to set up and configure everything about your instances from your operating system up to your applications. An Amazon Machine Image (AMI) is simply a packaged-up environment that includes all the necessary bits to set up and boot your instance. Your AMIs are your unit of deployment. You might have just one AMI or you might compose your system out of several building block AMIs (e.g., webservers, appservers, and databases). Amazon EC2 provides a number of command line tools to make creating an AMI easy. Once you create a custom AMI, you will need to upload it to Amazon S3. Amazon EC2 uses Amazon S3 to provide reliable, scalable storage of your AMIs so that we can boot them when you ask us to do so.

You can also choose from a library of globally available AMIs that provide useful instances. For example, if you just want a simple Linux server, you can choose one of the standard Linux distribution AMIs. Once you have set up your account and uploaded your AMIs, you are ready to boot your instance. You can start your AMI on any number and type of instance by calling the Run Instances API. If you wish to run more than 20 instances or if you feel you need more than 5 Elastic IP addresses, please complete the Amazon EC2 instance request form or the Elastic IP request form and your increase request will be considered.

Paying for What You Use

You will be charged at the end of each month for your EC2 resources actually consumed.

As an example, assume you launch 100 instances of the Small type costing $0.10 per hour at some point in time. The instances will begin booting immediately, but they won't necessarily all start at the same moment. Each instance will store its actual launch time. Thereafter, each instance will charge for its hours (at $0.10/hour) of execution at the beginning of each hour relative to the time it launched. Each instance will run until one of the following occurs: you terminate the instance with the Terminate Instances API call (or an equivalent tool), the instance shuts itself down (e.g. UNIX "shutdown" command), or the host terminates due to software or hardware failure. Partial instance hours consumed are billed as full hours.
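As a concrete illustration of this billing rule, the short sketch below adds up the charge for a handful of made-up instance runtimes, rounding each partial hour up to a full hour as described above.

import math

RATE_PER_HOUR = 0.10  # Small instance rate used in the example above

def ec2_charge(runtimes_hours):
    # Partial instance-hours consumed are billed as full hours.
    billed_hours = sum(math.ceil(h) for h in runtimes_hours)
    return billed_hours * RATE_PER_HOUR

# Hypothetical example: three instances that ran for 0.5 h, 1.2 h and 3 h
# are billed for 1 + 2 + 3 = 6 instance-hours, i.e. $0.60.
print(ec2_charge([0.5, 1.2, 3.0]))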

Service Highlights:-

· Elastic

o Amazon EC2 enables you to increase or decrease capacity within minutes, not hours or days.

· Completely Controlled

o You have complete control of your instances. You have root access to each one, and instances can be rebooted remotely using web service APIs.

· Flexible

o You have the choice of several instance types, allowing you to select a configuration of memory, CPU, and instance storage that is optimal for your application.

· Designed for use with other Amazon Web Services

o Amazon EC2 works in conjunction with Amazon Simple Storage Service (Amazon S3), Amazon SimpleDB and Amazon Simple Queue Service (Amazon SQS) to provide a complete solution for computing, query processing and storage across a wide range of applications.

· Reliable

o Amazon EC2 offers a highly reliable environment where replacement instances can be rapidly and reliably commissioned. The service runs within Amazon's proven network infrastructure and datacenters.

· Features for Building Failure Resilient Applications

o Amazon EC2 provides powerful features to build failure resilient applications including:

Multiple Locations

Amazon EC2 provides the ability to place instances in multiple locations. Amazon EC2 locations are composed of regions and Availability Zones. Regions are geographically dispersed and will be in separate geographic areas or countries. Currently, Amazon EC2 exposes only a single region. Availability Zones are distinct locations that are engineered to be insulated from failures in other Availability Zones and provide inexpensive, low-latency network connectivity to other Availability Zones in the same region.

Elastic IP Addresses

Elastic IP addresses are static IP addresses designed for dynamic cloud computing. An Elastic IP address is associated with your account, not a particular instance, so it can be remapped from a failed instance to a replacement (a short allocation sketch follows after this list).

· Secure

o Amazon EC2 provides web service interfaces to configure firewall settings that control network access to and between groups of instances.

· Inexpensive

o Amazon EC2 passes on to you the financial benefits of Amazon's scale. You pay a very low rate for the compute capacity you actually consume.
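Returning to the Elastic IP feature highlighted above, the hedged sketch below shows how an address could be allocated and then re-pointed from one instance to another using the boto3 Python SDK (an assumption, as before); the instance IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Allocate an Elastic IP address; it belongs to the account, not to an instance.
alloc = ec2.allocate_address(Domain="vpc")

# Associate it with a running instance (placeholder ID).
ec2.associate_address(InstanceId="i-0123456789abcdef0",
                      AllocationId=alloc["AllocationId"])

# Later, if that instance fails, point the same address at a replacement.
ec2.associate_address(InstanceId="i-0fedcba9876543210",
                      AllocationId=alloc["AllocationId"],
                      AllowReassociation=True)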

Amazon EC2 (Amazon Elastic Compute Cloud):-

Amazon Elastic Compute Cloud, also known as "EC2", is a commercial web service which allows paying customers to rent computers on which to run their own computer applications. EC2 allows scalable deployment of applications by providing a web services interface through which customers can request an arbitrary number of Virtual Machines, i.e. server instances, on which they can load any software of their choice. Current users are able to create, launch, and terminate server instances on demand, hence the term "elastic". The Amazon implementation allows server instances to be created in zones that are insulated from correlated failures. EC2 is one of several Web Services provided by Amazon.com under the blanket term Amazon Web Services (AWS).

It is designed to make web-scale computing easier for developers. It provides you with complete control of your computing resources and lets you run on Amazon's proven computing environment. Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.


Virtual machines

EC2 uses Xen virtualization. Each virtual machine, called an instance, is a virtual private server and can be one of three sizes: small, large or extra large. Instances are sized in EC2 Compute Units, which express the equivalent CPU capacity of physical hardware.

One EC2 Compute Unit is equivalent to a 1.0-1.2 GHz 2007 Opteron or 2007 Xeon processor. The three available instance sizes are as follows:

1. Small Instance (Default)

· Processor: 1.2 GHz Opteron or Xeon

· Platform: 32-bit

· RAM: 1.6 GB

· Hard disk: 160 GB

2. Large Instance

· Processor: 1.2 GHz Opteron or Xeon

· Platform: 64-bit

· RAM: 7.5 GB

· Hard disk: 850 GB

3. Extra Large Instance

· Processor: 1.2 GHz Opteron or Xeon

· Platform: 64-bit

· RAM: 15 GB

· Hard disk: 1690 GB

High-CPU Instances

Instances of this family have proportionally more CPU resources than memory (RAM) and are well suited for compute-intensive applications.

High-CPU Medium Instance: instances of this type have the following configuration:

· 1.7 GB of memory

· 5 EC2 Compute Units (2 virtual cores with 2.5 EC2 Compute Units each)

· 350 GB of instance storage

· 32-bit platform

· I/O performance: moderate

High-CPU Extra Large Instance: instances of this type have the following configuration:

· 7 GB of memory

· 20 EC2 Compute Units (8 virtual cores with 2.5 EC2 Compute Units each)

· 1690 GB of instance storage

· 64-bit platform

· I/O performance: high

Utility Computing Solutions:-

Virtualization by itself, however, is not a complete utility computing solution; it needs the VM plus add-ons such as network virtualization and IP address allocation. There are many utility computing solutions. The most widely used are:

· Amazon’s EC2:

o Amazon Elastic Compute Cloud, also known as "EC2", is a commercial web service which allows paying customers to rent computers on which to run their own applications.

· 3tera’s AppLogic:

o 3Tera's AppLogic grid operating system eliminates the binding of software to hardware through virtualization.

Here, only Amazon EC2 is explained in detail.

How Utility Computing is Enabled:-

Utility computing has gained considerable popularity over the past eighteen months as businesses big and small seek to take advantage of the flexibility the new computing model offers.

Virtualization is commonly used for server consolidation, carving physical servers into smaller virtual machines (VM) that can be used as if they were real servers.

Server virtualization solutions like VMware and Xen are used for this purpose.

Virtualization by itself, however, is not a complete utility computing solution. However, there are two commercial utility computing solutions based on virtualization that are more than a year old now, Amazon’s EC2 and 3tera’s AppLogic.

Therefore, we can start to evaluate the required elements of a successful utility computing solution based on those services. The rest of this article is a list of services required beyond virtualization in order to build a utility computing system.

1. Storage

Storage is easily the biggest hurdle to utility computing, and if poorly architected can affect cost, performance, scalability and portability of the system.

2. Network virtualization

When installing software on a physical server or virtual machine it’s normal practice for each system to be configured with the name or IP addresses of numerous other resources within the data center.

3. Scheduling

As users start their applications, the utility system needs a scheduling mechanism that determines where virtual machines will run on the available hardware resources. The scheduler must deal not only with CPU and memory, but also with storage and network capacity across the entire system (a toy placement sketch follows after this list).

4. Image management

Experienced users of virtualization have observed how the number of images can seemingly explode. Utility systems need to provide image management that allows users to organize their images and easily deal with version control across the system.

5. VM configuration

The tremendous increase in the number of images also exacerbates the manual configuration of virtual machines. Unlike physical servers, which are usually configured carefully once and then ideally left alone for a long time, VMs in utility computing systems are frequently moved around, reconfigured, restarted or shut down.

6. IP address allocation

IP address assignment can create bindings between virtual machines, yet applications often require static IP addresses for public facing interfaces.

7. Monitoring/high availability

With applications running on a utility computing service, system administrators still need to be able to monitor operations and create systems that offer high availability.
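To make the scheduling requirement (item 3 above) more concrete, here is a deliberately simplified first-fit placement sketch. It is purely illustrative: a real scheduler would also weigh storage, network capacity and failure domains, and all of the capacities below are made-up numbers.

# Toy first-fit VM scheduler: place each VM on the first host that still has
# enough spare CPU and memory. Not any real product's algorithm.
hosts = [
    {"name": "host1", "cpu": 8, "mem": 32},
    {"name": "host2", "cpu": 4, "mem": 16},
]

vms = [
    {"name": "web", "cpu": 2, "mem": 4},
    {"name": "db",  "cpu": 4, "mem": 16},
    {"name": "app", "cpu": 4, "mem": 8},
]

def schedule(vms, hosts):
    placement = {}
    for vm in vms:
        for host in hosts:
            if host["cpu"] >= vm["cpu"] and host["mem"] >= vm["mem"]:
                host["cpu"] -= vm["cpu"]
                host["mem"] -= vm["mem"]
                placement[vm["name"]] = host["name"]
                break
        else:
            placement[vm["name"]] = None  # no host has enough capacity left
    return placement

print(schedule(vms, hosts))
# e.g. {'web': 'host1', 'db': 'host1', 'app': 'host2'}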

Extended services

The preceding 7 services are those clearly recognizable as being required for basic utility computing, based on existing commercial utility computing systems, but this is not a complete list of possible innovations. Other services may be needed in order to build commercially viable systems. Here are a few examples:

  • import/export of VMs, including multiple VMs and their configuration, in a way that can be recovered elsewhere
  • dynamic resizing of VMs, handling live migration and its interactions with the storage systems

In summary, the current level of virtualization technologies is inadequate to support and deliver true utility computing systems.

Role of Grid Computing?

Grid Computing:-

Grid computing systems work on the principle of pooled resources: they share the load across multiple computers so that tasks are completed more efficiently and quickly. The grid computing concept isn't a new one. It's a special kind of distributed computing. In distributed computing, different computers within the same network share one or more resources. In the ideal grid computing system, every resource is shared, turning a computer network into a powerful supercomputer. With the right user interface, accessing a grid computing system would look no different than accessing a local machine's resources. Every authorized computer would have access to enormous processing power and storage capacity. Before going too much further, let's take a quick look at a computer's resources:

· Central processing unit (CPU): A CPU is a microprocessor that performs mathematical operations and directs data to different memory locations. Computers can have more than one CPU.

· Memory: In general, a computer's memory is a kind of temporary electronic storage. Memory keeps relevant data close at hand for the microprocessor. Without memory, the microprocessor would have to search and retrieve data from a more permanent storage device such as a hard disk drive.

· Storage: In grid computing terms, storage refers to permanent data storage devices like hard disk drives or databases.

Normally, a computer can only operate within the limitations of its own resources. There's an upper limit to how fast it can complete an operation or how much information it can store. Most computers are upgradeable, which means it's possible to add more power or capacity to a single computer, but that's still just an incremental increase in performance.

Grid computing systems link computer resources together in a way that lets someone use one computer to access and leverage the collected power of all the computers in the system. To the individual user, it's as if the user's computer has transformed into a supercomputer.

Grid computing is a kind of high-performance computing (HPC), an emerging technique in which multiple computers link together to combine resources.

Note:- Grid computing is still a developing field and is related to several other innovative computing systems, some of which are subcategories of grid computing. Shared computing usually refers to a collection of computers that share processing power in order to complete a specific task. Then there's a software-as-a-service (SaaS) system known as utility computing, in which a company offers specific services (such as data storage or increased processor power) for a metered cost. Cloud computing is a system in which applications and storage "live" on the Web rather than on a user's computer.

How Grid Computing Works:-

In general, a grid computing system requires:

· At least one computer, usually a server, which handles all the administrative duties for the system. Many people refer to this kind of computer as a control node. Other application and Web servers (both physical and virtual) provide specific services to the system.

· A network of computers running special grid computing network software. These computers act both as a point of interface for the user and as the resources the system will tap into for different applications. Grid computing systems can either include several computers of the same make running on the same operating system (called a homogeneous system) or a hodgepodge of different computers running on every operating system imaginable (a heterogeneous system). The network can be anything from a hardwired system where every computer connects to the system with physical wires to an open system where computers connect with each other over the Internet.

· A collection of computer software called middleware. The purpose of middleware is to allow different computers to run a process or application across the entire network of machines. Middleware is the workhorse of the grid computing system. Without it, communication across the system would be impossible. Like software in general, there's no single format for middleware.

If middleware is the workhorse of the grid computing system, the control node is the dispatcher. The control node must prioritize and schedule tasks across the network. It's the control node's job to determine what resources each task will be able to access. The control node must also monitor the system to make sure that it doesn't become overloaded. It's also important that each user connected to the network doesn't experience a drop in his or her computer's performance. A grid computing system should tap into unused computer resources without impacting everything else.

Overall Working:-



[Figure: a control server manages a network of grid nodes that serve thin clients.]

· Control server: controls the grid.

· Grid nodes: a network of computers running special grid computing software.

· Thin client: a client with little memory or a low-powered PC (existing systems can also use utility computing services).

· Here the network must be highly reliable.

· Utility computing also uses virtualization on each node.

· Tasks are distributed over the grid nodes for processing (a toy distribution sketch follows below).

· The processed output is sent back to the thin client/requester.
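As a rough, single-machine analogy of how work is farmed out to grid nodes and the results returned to the requester, the sketch below splits a job across a pool of local worker processes; a real grid would use middleware to dispatch the tasks to separate machines.

# Toy illustration of grid-style work distribution: the "control node" splits a
# job into tasks, worker processes (stand-ins for grid nodes) handle them in
# parallel, and the combined result comes back to the requester.
from concurrent.futures import ProcessPoolExecutor

def process_task(chunk):
    # Stand-in for real work performed on a grid node.
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))
    chunks = [data[i:i + 250_000] for i in range(0, len(data), 250_000)]

    with ProcessPoolExecutor(max_workers=4) as pool:
        partial_results = list(pool.map(process_task, chunks))

    print("combined result:", sum(partial_results))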

Server side:-

"Utility computing" has usually envisioned some form of virtualization so that the amount of storage or computing power available is considerably larger than that of a single time-sharing computer. Multiple servers are used on the "back end" to make this possible. Main technology behind are

1. Virtualization

2. Grid Computing

1. Virtualization:-

In computing, virtualization means to create a virtual version of a device or resource, such as a server, storage device, network or even an operating system. Utility computing mainly deals with Server Virtualization.


Server virtualization lets one physical machine do the work of several. By using specially designed software (VMware, Virtual PC, etc.), an administrator can convert one physical server into multiple virtual machines. Each virtual server acts like a unique physical device, capable of running its own operating system (OS). In theory, you could create enough virtual servers to use all of a machine's processing power, though in practice that's not always the best idea. The main kinds of virtualization used in this context are:

1. Machine Virtualization

2. Application Virtualization

1. Machine Virtualization:-

Here the entire operating system is virtualized.

2. Application Virtualization:-

Here, only the specific applications that need to be virtualized are virtualized.
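As a small, hedged illustration of machine virtualization on the server side, the sketch below uses the libvirt Python bindings (libvirt is a common management API for hypervisors such as Xen and KVM; it is not mentioned in the original text) to list the virtual machines running on one physical host.

# List the virtual machines running on a single physical host.
# Requires the libvirt-python package and a local hypervisor (e.g. Xen or KVM).
import libvirt

conn = libvirt.open("qemu:///system")   # connection URI is an assumption
try:
    for dom in conn.listAllDomains():
        state, max_mem, mem, vcpus, cpu_time = dom.info()
        print(f"{dom.name()}: {vcpus} vCPUs, {mem // 1024} MB RAM")
finally:
    conn.close()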

Note:-

· Virtual hardware: Until recently, the only way to create a virtual server was to design special software to trick a server's CPU into providing processing power for multiple virtual machines. Today, processor manufacturers like Intel and AMD offer processors with support for virtual servers already built in. The hardware doesn't actually create the virtual servers -- network engineers still need the right software to create them.

· Server computers: machines that host files and applications on computer networks. They have to be powerful; some have central processing units (CPUs) with multiple processors that give these servers the ability to run complex tasks with ease. Computer network administrators usually dedicate each server to a specific application or task.

Advantages of Server Virtualization:-

· Migration:- An emerging trend in server virtualization is called migration. Migration refers to moving a server environment from one place to another. With the right hardware and software, it's possible to move a virtual server from one physical machine in a network to another. Originally, this was possible only if both physical machines ran on the same hardware, operating system and processor. It's possible now to migrate virtual servers from one physical machine to another even if both machines have different processors, but only if the processors come from the same manufacturer.

Note: While migrating a virtual server from one physical machine to another is relatively new, the process of converting a physical server into a virtual server is also called migration. Specifically, it's physical to virtual migration (P2V).

· Isolation:-Virtual servers offer programmers isolated, independent systems in which they can test new applications or operating systems. Rather than buying a dedicated physical machine, the network administrator can create a virtual server on an existing machine. Because each virtual server is independent in relation to all the other servers, programmers can run software without worrying about affecting other applications.

· Less Power Consumption and Space:- Server virtualization conserves space and power through consolidation. It's common practice to dedicate each server to a single application. If several applications only use a small amount of processing power, the network administrator can consolidate several machines into one server running multiple virtual environments. For companies that have hundreds or thousands of servers, the need for physical space can decrease significantly.

· Reduced Hardware Purchases:- It's possible that much of our everyday computing needs will be handled across a network connection as virtual servers provide applications and storage. As a result, the consumer hardware market could change. You wouldn't need the fastest PC to run the latest software; a remote network of virtual servers could handle the processing, and all you would need is a simple networked terminal to access it.

Disadvantages of Server Virtualization:-

· Not suited to high processing demands:- For servers dedicated to applications with high demands on processing power, virtualization isn't a good choice. That's because virtualization essentially divides the server's processing power up among the virtual servers. When the server's processing power can't meet application demands, everything slows down. Tasks that shouldn't take very long to complete might last hours. Worse, it's possible that the system could crash if the server can't meet processing demands. Network administrators should take a close look at CPU usage before dividing a physical server into multiple virtual machines. It's also unwise to overload a server's CPU by creating too many virtual servers on one physical machine: the more virtual machines a physical server must support, the less processing power each of them can receive. In addition, there's a limited amount of disk space on physical servers, and too many virtual servers could impact the server's ability to store data.

· Migration Problem (Processor):- Another limitation is migration. Right now, it's only possible to migrate a virtual server from one physical machine to another if both physical machines use the same manufacturer's processor. If a network uses one server that runs on an Intel processor and another that uses an AMD processor, it's impossible to port a virtual server from one physical machine to the other.

Note:-

In the early days of server virtualization, when it came to virtualization software there was only one game in town: VMware. Today, several companies offer virtualization software. Some of it is proprietary, but other programs are open source, created and distributed by the public rather than a corporation. Here are some of the big players in virtualization software:

· FreeVPS

· Microsoft Virtual Server

· Parallels

· Qemu

· SWSoft

· Virtual Iron

· Virtuozzo

· Xen

· Virtual PC 2007

Client Side:-

On the client side we usually use a "thin client" (or a low-powered PC).

Thin Client:-

A thin client is a low-cost computing device that works in an application server environment. It does not require state-of-the-art, powerful processors or large amounts of RAM and ROM. A thin client environment also provides assurance for disaster recovery and business continuity, as users' applications and configurations are stored on centrally managed servers with backups.








[Figure: thin clients connect over the Internet/cloud to central servers. The Internet/cloud side may be a single (virtualized) server or a group of servers (a computing grid).]

  • Hardware/software that runs applications on a server, not on the desktop.
  • Keystrokes and mouse clicks are sent over the network to the server, which processes them and sends back the result (the screen image).
  • Clients can be low-powered PCs or dedicated thin client devices.
  • They have no HDD, FDD, CD-ROM or cooling fans, and very low processing power.

Note:-

· Client: a computing device or piece of software that retrieves information from a server.

· Thick client: a computing device that includes a full operating system, a powerful processor and a wide range of applications that execute on the device itself.

Advantages of Thin Client:-

· Dramatically decreases the TCO, by 54% to 57%.

· Decreases IT cost by 80% through reduced staff and centralized software management; greatly simplifies software upgrades over the network.

· Eliminates hardware upgrades on the client side.

· Increases end-user productivity (limited access to authorized applications and storage).

· Increases the lifetime of the client (no moving parts, less power usage).

· Provides higher security (authentication, virus protection, data kept on the server, protection against theft).

· No access to HDD, FDD or CD-ROMs (avoids downloads, installations and junk data on the HDD).

· Reduced power consumption.

· Centralized backup (home directory mapping).

· Simplifies infrastructure.

· 286/386/486 PCs can be converted to thin clients and can work at the speed of an 800 MHz Celeron processor.

Disadvantages of Thin Client:-

  • Entry costs are high for servers and installation expertise.
  • More bandwidth is required; multimedia and project-based learning applications run very slowly.
  • A thin client doesn't allow the flexibility to load software on the spot.
  • PCs are coming down in cost, and many educators believe users need the fully functioning capabilities of a desktop.

Thin Clients reduce costs by requiring only a low-cost display unit on each desk, while a server cluster provides the actual computing horsepower. This allows streamlined, centralized management of a large number of desktops, rather than maintaining hardware and software on each individual workstation.


