
The Evolution of Computing: Cloud Computing


Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. It is attractive to business owners because it eliminates the need to plan ahead for provisioning and allows enterprises to start small and increase resources only when service demand rises. Cloud computing is first and foremost a concept of distributed resource management and utilization. It aims to provide convenient access from any endpoint without requiring the purchase of software, platforms, or physical network infrastructure, which are instead outsourced from third parties.

The arrangement can improve competitive advantage and flexibility, but it also brings various challenges, notably privacy and security. In cloud computing, applications, computing, and storage resources live somewhere in the network, or cloud. Users don't worry about the location and can rapidly access as much or as little of the computing, storage, and networking capacity as they wish, paying only for what they use, just as they would for water or electricity provided by utility companies. The cloud is currently based on independently operated data centers, but the idea of a unifying platform, not unlike the Internet itself, has already been proposed.

[Image: Cloud Computing]
 
In a cloud computing environment, the traditional role of the service provider is divided in two: infrastructure providers, who manage cloud platforms and lease resources according to a usage-based pricing model, and service providers, who rent resources from one or more infrastructure providers to serve end users. Cloud computing providers offer their services according to several fundamental models: software as a service, infrastructure as a service, platform as a service, desktop as a service, and, more recently, backend as a service.
 
The backend as a service computing model, also known as "mobile backend as a service," is a relatively recent development in cloud computing, with most commercial services dating from 2011. It is a model for providing web and mobile application developers with a way to link their applications to backend cloud storage while also providing features such as user management, push notifications, and integration with social networking services. These services are provided via custom software development kits (SDKs) and application programming interfaces (APIs). Although similar to other cloud-computing developer tools, this model is distinct in that it specifically addresses the cloud-computing needs of web and mobile application developers by providing a unified means of connecting their apps to cloud services. The global market for these services is estimated to reach hundreds of millions of dollars in the coming years.
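To make the pattern concrete, the sketch below shows, in TypeScript, the kind of unified client such an SDK wraps: user sign-up, cloud data storage, and push-notification registration, all against a single backend. The base URL, the X-Api-Key header, and every endpoint and field name here are hypothetical illustrations, not the API of any real provider.

```typescript
// A minimal sketch of the client layer a BaaS SDK typically wraps.
// All endpoints, headers, and fields are hypothetical for illustration.

const BASE_URL = "https://api.example-baas.com/v1"; // hypothetical endpoint
const API_KEY = "YOUR_API_KEY";                     // issued by the provider

// Small helper: every call carries the app's API key and JSON payloads.
async function call<T>(path: string, method: string, body?: unknown): Promise<T> {
  const res = await fetch(`${BASE_URL}${path}`, {
    method,
    headers: { "Content-Type": "application/json", "X-Api-Key": API_KEY },
    body: body === undefined ? undefined : JSON.stringify(body),
  });
  if (!res.ok) throw new Error(`BaaS call failed: ${res.status}`);
  return res.json() as Promise<T>;
}

// User management: the backend stores credentials and issues a session token.
interface Session { userId: string; token: string }
const signUp = (email: string, password: string) =>
  call<Session>("/users", "POST", { email, password });

// Cloud storage: persist an application object without running a server.
const saveNote = (session: Session, text: string) =>
  call<{ id: string }>("/collections/notes", "POST", { owner: session.userId, text });

// Push notifications: register a device token so the backend can reach it.
const registerDevice = (session: Session, deviceToken: string) =>
  call<{ ok: boolean }>("/push/devices", "POST", { owner: session.userId, deviceToken });

// Example flow: sign up, store a record, and enable push for this device.
async function main() {
  const session = await signUp("dev@example.com", "s3cret");
  await saveNote(session, "hello from the cloud");
  await registerDevice(session, "device-token-123");
}

main().catch(console.error);
```

In practice a commercial SDK hides the HTTP layer entirely behind typed objects, but the shape is the same: one credential, one endpoint, and a handful of prebuilt backend features the developer never has to host.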
 
 
Clearly, public cloud computing is at an early stage in its evolution. However, all of the companies offering public cloud computing services have data centers; in fact, they are building some of the largest data centers in the world. They all have network architectures that demand flexibility, scalability, low operating cost, and high availability, built on top of products and technologies supplied by Brocade and other network vendors. These public cloud companies are building their business on data center designs that virtualize computing, storage, and network equipment, which is the foundation of their IT investment. Cloud computing over the Internet is commonly called "public cloud computing"; when delivered from a company's own data center, it is commonly called "private cloud computing." The difference lies in who maintains control of and responsibility for the servers, storage, and networking infrastructure and ensures that application service levels are met. In public cloud computing, some or all aspects of operations and management are handled by a third party "as a service." Users can access an application or computing and storage resources over the Internet via the HTTP address of the service.


The Evolution of Computing: Virtualization


In the 1980s, the spread of countless PCs through organizations effectively killed the need for virtualization as a multitasking solution. Virtualization was widely abandoned and not picked up again until the late 1990s, when the technology found a new use and purpose. The booming PC and datacenter industry brought an unprecedented increase in the need for computer space, as well as in the cost of power to support these installations. Back in 2002, data centers already accounted for 1.5 percent of total U.S. power consumption, and that share was growing by an estimated 10 percent every year. More than 5 million new servers were deployed every year, adding power demand equivalent to thousands of new homes. As experts warned of excessive power usage, hardware makers began focusing on more power-efficient components to enable future growth and reduce the need for data center cooling. Data center owners began developing smart design approaches to make cooling and airflow in data centers more efficient.

[Image: Datacenter Power & Cooling]
 
At this time, most computing was supported by the highly inefficient x86-based IT model, built on the architecture Intel introduced in 1978. Cheap hardware created a habit of over-provisioning and under-utilizing: any time a new application was needed, it often required separate systems for development and production use. Multiply that across the tiers of a multi-tier application, and it wasn't uncommon to see 8-10 new servers ordered for every application. Most of these servers were highly underutilized, since many existed only to support an irregular testing schedule, and it often took a fairly intensive application to even put a dent in the total utilization capacity of a production server.
 
[Image: Server Virtualization]
 
In 1998, VMware solved the problem of virtualizing the x86 architecture, opening a path toward controlling the wasteful nature of IT data centers. This server consolidation effort is what helped establish virtualization as a go-to technology for organizations of all sizes. IT departments noticed capital expenditure savings from buying fewer, higher-powered servers to handle the workloads of 15-20 physical servers, while operational expenditure savings came from the reduced power consumption required for powering and cooling them. Organizations also realized that virtualization provided a platform for simplified availability and recoverability: a more responsive and sustainable IT infrastructure that offered new opportunities to keep critical workloads running, or to recover them more quickly than ever in the event of a catastrophic failure.


The Evolution of Computing: The Internet Datacenter


The boom of datacenters and datacenter hosting came during the dot-com era. As the bubble grew, companies began to understand the importance of having an Internet presence, which required fast and reliable Internet connectivity and the capability to operate 24 hours a day in order to deploy new systems and establish a presence on the Web. For smaller companies, installing their own hosting equipment was not a viable option.

[Image: Data Center]

Soon, these new requirements resulted in the construction of extremely large data facilities responsible for operating a company's computer systems and deploying new ones. However, not every company could afford such a facility: the physical space, equipment requirements, and highly trained staff made large datacenters extremely expensive and sometimes impractical. To respond to this demand, many companies began building large shared facilities, called Internet datacenters, which provided businesses of all sizes with a wide range of solutions for operations and system deployment.

[Image: Datacenter]
 
New technologies and practices were designed and implemented to handle the operating requirements and scale of these facilities, revolutionizing technologies and operating practices across the industry. Private datacenters were born out of this need for an affordable Internet datacenter solution: today's private datacenters give small businesses access to the benefits of the large Internet datacenters without the expense of upkeep or the sacrifice of valuable physical space.
 
 
 

The Evolution of Computing: Distributed Computing


After microcomputers came the world of distributed systems. One important characteristic of the distributed computing environment was that all of the major operating systems were available on small, low-cost servers, which made it easy for departments and other corporate groups to purchase servers outside the control of the traditional, centralized IT environment. As a result, applications often just appeared without following any of the standard development processes. Engineers programmed applications on their desktop workstations and used them for what later proved to be mission-critical or revenue-sensitive purposes. As they shared applications with others in their departments, their workstations became servers serving many people within the organization.

[Image: Server Mess]
 
In the distributed computing environment, it was common for applications to be developed following a one-application-to-one-server model. Because funding for application development came from vertical business units, and those units insisted on having their applications on their own servers, each time an application was put into production, another server was added. The problem with this approach is that the one-application-to-one-server model is really a misnomer: each new application generally required at least three new servers, and often more, including development servers, test servers, training servers, and cluster and disaster-recovery servers.
 
[Image: Messy Servers]
 
Therefore, it became standard procedure in big corporations to purchase 8 or 10 servers for every new application being deployed; a three-tier application with dedicated development, test, and disaster-recovery machines alongside its production tiers easily reaches that count. It was the prelude to the enormous bubble that would ultimately cause the collapse of many organizations that thought cyberspace was an easy and limitless way to make money.
 

The Evolution of Computing: Personal Computing


Initially, companies developed applications on minicomputers because they offered more freedom than the mainframe environment. The rules and processes around minis were typically more flexible, giving developers room to be more creative when writing applications. In many ways, minis were the first step toward freedom from mainframe computing. However, with each computer managed however its owner chose, the lack of accepted policies and procedures often led to a somewhat chaotic environment. Further, because each mini vendor had its own proprietary OS, programs written for one vendor's mini were difficult to port to another; in most cases, changing vendors meant rewriting applications for the new OS. This lack of application portability was a major factor in the demise of the mini.

During the 1980s, the computer industry experienced the boom of the microcomputer era. In the excitement accompanying this boom, computers were installed everywhere, and little thought was given to the machines' specific environmental and operating requirements. From this point on, computing that had previously been done on terminals that served only to interact with the mainframe, the so-called "dumb terminals," would be done on personal computers, machines with their own resources. This new computing model was the embryo of modern cyberspace and all the services we know today.
 
[Image: IBM PC 5150]