
Windows Server 2012 Hardening (Part II)

Using the Security Configuration and Analysis snap-in

Microsoft provides security templates for Windows Server and client operating systems, containing security configurations designed for different scenarios and server roles. Some security templates are part of the operating system and get applied during different operations, such as when promoting a server to a domain controller.

In Windows Server 2008 and later versions, security templates are located in %systemroot%\inf and are more limited than in Windows Server 2003. Templates include:

  • Defltbase.inf (baseline)
  • Defltsv.inf (web/file/print servers)
  • DCfirst.inf (for the first domain controller in a domain)
  • Defltdc.inf (other domain controllers)

Basically, you repeat the procedures already explained for Windows 7 with the two different tools, but instead of loading the .inf file from the STIG, you now load one of the security templates shipped with Windows Server 2012.

Analyze the baseline template with the Policy Analyzer

Add the baseline template

image
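
If you prefer the command line to the MMC snap-in, secedit can perform the same kind of analysis. A minimal sketch, assuming the default template location and an arbitrary database and log path of your choosing:

rem Build a local database from the baseline template and analyze the current system against it
secedit /analyze /db C:\Temp\baseline.sdb /cfg %systemroot%\inf\defltbase.inf /log C:\Temp\baseline.log

The resulting log lists every setting that does not match the template; the same database can later be applied with secedit /configure if you decide to enforce the template.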

Windows Server 2012 Hardening (Part I)

Servers are the penultimate layer of security between potential threats and your organization’s data. Therefore, applying proper security policies specifically for each server profile is both important and necessary.

Common-sense recommendations are to "stop all unnecessary services" or "turn off unused features". Fortunately, every new version of Windows Server is built to be more secure by default. That said, it is common to have several different roles assigned to a single server, as well as multiple sets of file servers, web servers, database servers, etc. So, how can we guarantee that each of these servers, with their different characteristics, is configured in compliance with best security practices?

Using the Security Compliance Manager

Using SCM in Windows Server is basically the same as using it on a workstation. The major difference is related to what you can do with your GPOs once you are done.

You cannot install SCM 4 on Windows Server 2012 just like that: you’ll probably get a warning from the Program Compatibility Assistant. This is a known issue when installing SQL Server 2008 Express, even on supported OSes.

Besides, Windows Server is not on the list of SCM 4 supported OSes…

image

To overcome this, install a newer version of SQL Server, like SQL Server 2014 Express, before installing SCM and everything will go smoothly.

The procedure will be exactly the same as what we did for Windows 10, but now we are going to do some extra steps.

GPEdit vs SecPol

Many users have questions regarding the difference between the Local Group Policy Editor (gpedit.msc) and the Local Security Policy (secpol.msc), but there is nothing mysterious about these two tools.

Both are used for administering system and security policies on your computer. The difference between the two is most visible in the scope of the policies each tool can edit.

To start explaining the difference, we can say that secpol.msc is a subcategory of gpedit.msc.

image

  • Gpedit.msc is the file name of the Group Policy Editor console, essentially a graphical user interface for editing registry entries. Editing those entries directly is not easy, because they are scattered across many places in the registry, but this tool makes their administration simpler.
  • Secpol.msc is another Windows module used for administering system settings. The Local Security Policy editor is the smaller sibling of the Group Policy Editor, used to administer a subset of what you can administer using gpedit.msc.

While group policies apply to your computer and users in your domain universally and are often set by your domain administrator from a central location, local security policies, as the name suggests, are relevant to your particular local machine only.

You can see that when opening the Group Policy Editor (gpedit.msc), you get to see more than when opening the Local Security Policy Editor (secpol.msc), and that is the major difference.

  • The gpedit.msc is broader.
  • The secpol.msc is narrower and focuses more on security-related registry entries.

Previous post: Windows 10 Hardening (Part II)

Next post: Windows Server 2012 Hardening (Part I)

Windows 10 Hardening (Part II)

Using the Security Compliance Manager

SCM 4.0 provides ready-to-deploy policies based on Microsoft Security Guide recommendations and industry best practices, allowing you to easily manage configuration drift, and address compliance requirements for Windows operating systems and Microsoft applications.

image

Update baselines

image

Windows 10 Hardening (Part I)

Using the STIG templates

Just like in previous versions of Windows, some of the requirements in the Windows 10 STIG depend on the use of additional group policy administrative templates that are not included with Windows by default. The new administrative template files (.admx and .adml file types) must be copied to the appropriate location in the Windows directory to make the settings they provide visible in group policy tools.

This includes settings under MS Security Guide, MSS (Legacy), and the Enhanced Mitigation Experience Toolkit (EMET) tool. The MSS settings have previously been made available through an update of the Windows security options file (sceregvl.inf). This required a change in permissions to that file, which is typically controlled by the system. A custom template was developed to avoid this.

The custom template files (MSS-Legacy and SecGuide) are provided in the Templates directory of the STIG package. The EMET administrative template files are located in the tool’s installation directory, typically “\Program Files (x86)\EMET x.x\Deployment\Group Policy Files\”.

The .admx files must be copied to the \Windows\PolicyDefinitions\ directory. The .adml files must be copied to the \Windows\PolicyDefinitions\en-US\ directory.
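
For example, copying the two custom templates could look like this from an elevated command prompt (a sketch; C:\STIG\Templates is a hypothetical path standing in for wherever the STIG package was unpacked, and the file names are assumed to match the template names mentioned above):

rem Copy the custom administrative templates into the local policy definitions store
copy "C:\STIG\Templates\SecGuide.admx" "%systemroot%\PolicyDefinitions\"
copy "C:\STIG\Templates\SecGuide.adml" "%systemroot%\PolicyDefinitions\en-US\"
copy "C:\STIG\Templates\MSS-Legacy.admx" "%systemroot%\PolicyDefinitions\"
copy "C:\STIG\Templates\MSS-Legacy.adml" "%systemroot%\PolicyDefinitions\en-US\"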

NOTE: EMET’s end-of-life date has been extended until July 31, 2018, and at this time there are no plans to offer support or security patching for EMET after that date. For improved security, everyone should migrate to the latest version of Windows 10. EMET 5.5 is compatible with current versions of Windows 10 but, according to this article, it won’t be compatible with future versions of the latest Microsoft OS.

Before the installation of the STIG templates, Windows 10 Enterprise has:

  • 2283 Computer configuration settings
  • 1731 User configuration settings

image

 

Linux Hardening with OpenSCAP

The OpenSCAP project is a collection of open source tools for implementing and enforcing the SCAP standard, and it was awarded SCAP 1.2 certification by NIST in 2014. The project provides tools that are free to use anywhere you like, for any purpose.

The OpenSCAP basic tools are:

  • OpenSCAP Base
    • Provides a command line tool which enables various SCAP capabilities such as displaying the information about specific security content, vulnerability and configuration scanning, or converting between different SCAP formats.
  • SCAP Workbench
    • User friendly graphical utility offering an easy way to tailor SCAP content to your needs, perform local or remote scans, and export results.
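
As a quick illustration of OpenSCAP Base, a typical configuration scan could look like the following (a sketch; the profile ID and the data-stream file name depend on your distribution and on the SCAP content installed, for example from the scap-security-guide package):

# Evaluate the system against an XCCDF profile, producing XML results and an HTML report
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard \
  --results results.xml --report report.html \
  /usr/share/xml/scap/ssg/content/ssg-ubuntu1604-ds.xml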

Linux Hardening with OpenVAS

The Open Vulnerability Assessment System (OpenVAS) is a framework of several services and tools offering a comprehensive and powerful vulnerability scanning and management solution.

image

    The security scanner is accompanied with a regularly updated feed of Network Vulnerability Tests (NVTs), over 51,000 in total (as of February 2017).

    OpenVAS Features

    The OpenVAS security suite consists of three parts:

    • OpenVAS Scanner
      • The actual scanner that executes the real-time vulnerability tests;
      • It can handle more than one target host at a time;
      • Uses the OpenVAS Transfer Protocol (OTP);
      • OTP supports SSL.
    • OpenVAS Manager
      • Handles the SQL Database where all scanning results and configurations are stored;
      • Controls the scanner via OTP and offers XML based OpenVAS Management Protocol (OMP);
      • It can stop, pause or resume scanning operations;
      • Makes user management possible including group level management and access control management.
    • OpenVAS CLI
      • Command line tool acting as a client for OMP.
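
To give an idea of how the CLI fits in, the sketch below assumes an OpenVAS Manager reachable on localhost at the default OMP port (9390) and an existing user; host and credentials are placeholders:

# List the configured scan tasks by sending a raw OMP command to the manager
omp --host localhost --port 9390 --username admin --password 'changeme' --xml '<get_tasks/>'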

    Linux Hardening with Lynis

Lynis is a powerful open source auditing tool for Unix/Linux-like operating systems. It scans the system for general system information, installed software, configuration mistakes, security issues, user accounts without passwords and wrong file permissions, and it also audits the firewall configuration.

Lynis is also one of the most trusted automated auditing tools for software patch management, malware scanning and vulnerability detection on Unix/Linux based systems. This tool is useful for auditors, network and system administrators, security specialists and penetration testers.

    Installing Lynis in Ubuntu

This application doesn’t require any installation; it can be run directly from any directory. So, it’s a good idea to create a custom directory for Lynis:

    sudo mkdir /usr/local/lynis

    Download the stable version of Lynis from the website and unpack it:

    cd /usr/local/lynis

    sudo wget https://cisofy.com/files/lynis-2.4.0.tar.gz

    image
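
To finish the unpacking step and run a first audit, something like the following should work (the tarball extracts into a lynis subdirectory):

sudo tar xzf lynis-2.4.0.tar.gz
cd lynis
sudo ./lynis audit system

The audit system command performs a full scan of the local host and prints its warnings and suggestions at the end of the run.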

    Linux Hardening with Tiger

Tiger is a security tool that can be used both as a security auditing tool and as an intrusion detection system (IDS). It supports multiple UNIX platforms, and it is free and provided under the GPL license.

    image

      Check all the details on the official website.

      Installing Tiger in Ubuntu

      Install the application by running the command:

      sudo apt-get install tiger

      image
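
After the installation, a complete audit can be started with a single command; by default the reports are typically written under /var/log/tiger (a minimal sketch):

sudo tiger

Each run produces a report file that can be reviewed afterwards and compared with previous runs.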

      Windows 7 Hardening (Part II)

      Enhanced Mitigation Experience Toolkit

      EMET is a free tool built to offer additional security defenses against vulnerable third party applications and assorted vulnerabilities. EMET helps prevent vulnerabilities in software from being successfully exploited by using security mitigation technologies. These technologies function as special protections and obstacles that an exploit author must defeat to exploit software vulnerabilities. These security mitigation technologies work to make exploitation as difficult as possible to perform but do not guarantee that vulnerabilities cannot be exploited.

      Download the tool here

      image

      and the User’s guide here.

      image

      Windows 7 Hardening (Part I)

Using the Microsoft Baseline Security Analyzer

Download MBSA 2.3, install it, and start a default scan on your Windows machine:

      image

      Typical results:

      image

  • Analyze the report and the proposed solutions.
  • Enable the IIS Windows feature (see the DISM sketch below).
  • Repeat the MBSA scan.
  • Analyze the new report and compare it with the previous one.
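
One way to enable IIS without going through the Control Panel is an elevated command prompt and DISM (a sketch; the feature name below enables only the base Web Server role, and additional sub-features may be needed for a realistic comparison):

rem Enable the IIS Web Server role on the running system
dism /online /enable-feature /featurename:IIS-WebServerRole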

      System Hardening

      System hardening refers to providing various means of protection in a computer system, eliminating as many security risks as possible. This is usually done by removing all non-essential software programs and utilities from the computer. While these programs may offer useful features to the user, they might provide "back-door" access to the system and thus must be removed to improve system security.

Extended system protection should be provided at various levels and is often referred to as defense in depth. Protecting in layers means protecting at the host layer, the application layer, the operating system layer, the data layer, the physical layer and all the sub-layers in between. Each one of these layers requires a unique method of security.

       

      Security Content Automation Protocol

      SCAP is a method for using commonly accepted standards to enable automated vulnerability management and security policy compliance metrics. It started as a collection of specifications originally created by the US government which are now an industry standard.

It was developed through the cooperation and collaboration of public and private sector organizations, including government, industry and academia, but the standard is still maintained by the US National Institute of Standards and Technology.

       

      Benefits of SCAP

      Automated tools that use SCAP specifications make it easier to continuously verify the security compliance status of a wide variety of IT systems. The use of standardized, automated methods for system security management can help organizations operate more effectively in complex, interconnected environments and realize cost savings.

      SCAP Components

      • CVE - Common Vulnerabilities and Exposures
        • Catalog of known security threats
      • CCE - Common Configuration Enumeration
        • List of “identifiers” and entries relating to security system configuration issues
        • Common identification enables correlation
      • CPE - Common Platform Enumeration
        • Structured naming scheme to describe systems, platforms, software
      • CVSS - Common Vulnerability Scoring System
        • Framework to describe the characteristics and impacts of IT vulnerabilities.
      • XCCDF - eXtensible Configuration Checklist Description Format
        • Security checklists, benchmarks and configuration documentation in XML format. 
      • OVAL - Open Vulnerability and Assessment Language
        • Common language for assessing the status of a vulnerability
      • OCIL – Open Checklist Interactive Language
        • Common language to express questions to be presented to a user and interpret responses
      • Asset Identification
        • This specification describes the purpose of asset identification, a data model and methods for identifying assets, and guidance on how to use asset identification.
      • ARF - Asset Reporting Format
        • Data model to express the transport format of information about assets, and the relationships between assets and reports.
      • CCSS - Common Configuration Scoring System
        • Set of measures of the severity of software security configuration issues
      • TMSAD - Trust Model for Security Automation Data
        • Common trust model that can be applied to specifications within the security automation domain.

      image

      Security Baselines

      US Government Configuration Baseline

The purpose of the USGCB initiative is to create security configuration baselines for Information Technology products widely deployed across the federal agencies.

The USGCB is a Federal government-wide initiative that provides guidance to agencies on what should be done to improve and maintain effective configuration settings, focusing primarily on security.

      IT-Grundschutz

      The aim of IT-Grundschutz is to achieve an appropriate security level for all types of information of an organization. IT-Grundschutz uses a holistic approach to this process.

      Through proper application of well-proven technical, organizational, personnel, and infrastructural safeguards, a security level is reached that is suitable and adequate to protect business-related information having normal protection requirements. In many areas, IT-Grundschutz even provides advice for IT systems and applications requiring a high level of protection.

      There are also the IT-Grundschutz Catalogues where you will find modules, threats and safeguards.

      CERN Mandatory Security Baselines

      The Security Baselines define a set of basic security objectives which must be met by any given service or system.

      The objectives are chosen to be pragmatic and complete, and do not impose technical means.

      Therefore, details on how these security objectives are fulfilled by a particular service/system must be documented in a separate "Security Implementation Document".

      Microsoft Security Baselines

A security baseline is a collection of settings that have a security impact, together with Microsoft’s recommended values for configuring those settings and guidance on their security impact.

      These settings are based on feedback from Microsoft security engineering teams, product groups, partners, and customers.

      Cisco Network Security Baseline

Developing and deploying a security baseline can be challenging due to the vast range of features available.

      The Network Security Baseline is designed to assist in this endeavor by outlining those key security elements that should be addressed in the first phase of implementing defense-in-depth.

      The main focus of Network Security Baseline is to secure the network infrastructure itself: the control and management planes.

       

      Security Standards

      These are common industry-accepted standards that include specific weakness-correcting guidelines. The main ones are published by the following organizations:

       

      Center for Internet Security

CIS Benchmarks are recommended technical settings for operating systems, middleware and software applications, and network devices. They are developed through a unique consensus-based process involving hundreds of security professionals worldwide and serve as de facto, best-practice configuration standards.

       

      International Organization for Standardization

      ISO/IEC 27002:2013 gives guidelines for organizational information security standards and information security management practices including the selection, implementation and management of controls taking into consideration the organization's information security risk environment(s).

       

      National Institute of Standards and Technology

The National Checklist Program (NCP), defined by NIST SP 800-70 Rev. 3, is the U.S. government repository of publicly available security checklists (or benchmarks) that provide detailed low-level guidance on setting the security configuration of operating systems and applications. NCP is migrating its repository of checklists to conform to SCAP, thus allowing standards-based security tools to automatically perform configuration checking using NCP checklists.

       

      Defense Information Systems Agency

      The Security Technical Implementation Guides (STIGs) and the NSA Guides are the configuration standards for DoD Information Assurance (IA) and IA-enabled devices/systems. The STIGs contain technical guidance to "lock down" information systems/software that might otherwise be vulnerable to a malicious computer attack.

       

      Bundesamt für Sicherheit in der Informationstechnik

      The BSI Standards contain recommendations on methods, processes, procedures, approaches and measures relating to information security.

       

      Compliance Requirements

Any organization that manages payments, handles private customer data, or operates in markets governed by security regulations needs to demonstrate security compliance to avoid penalties and meet customer expectations. These are some of the major compliance requirements:

       

      Payment Card Industry Data Security Standard

The PCI DSS is a set of security standards designed to ensure that all companies that accept, process, store or transmit credit card information maintain a secure environment. The PCI Security Standards Council was launched on September 7, 2006 to manage the ongoing evolution of the Payment Card Industry (PCI) security standards, with a focus on improving payment account security throughout the transaction process.

       

      Health Insurance Portability and Accountability Act

The HIPAA Privacy Rule, also called the Standards for Privacy of Individually Identifiable Health Information, essentially defines how healthcare provider entities may use individually identifiable health information, or PHI (Protected Health Information).

       

      Information Technology Infrastructure Library 

      ITIL compliance guidelines include categories such as change management, security architecture and help desk systems. Companies can then find ways to accomplish ITIL compliance by using the appropriate systems and strategies.

       

      Control Objectives for Information and Related Technology

      COBIT is a framework created for IT governance and management. It is meant to be a supportive tool for managers and allows bridging the crucial gap between technical issues, business risks and control requirements.

       

      National Institute of Standards and Technology

      The NIST is responsible for developing cybersecurity standards, guidelines, tests, and metrics for the protection of federal information systems. While developed for federal agency use, these resources are voluntarily adopted by other organizations because they are effective and accepted globally.

Next post: Windows 7 Hardening (Part I)

      Creating Virtual Machines in Windows 10

       
      Once you are done with the installation of Hyper-V, the creation of VMs is an easy procedure. First, you'll have to locate the Hyper-V manager icon and I suggest you place it in an easily accessible spot:
       
      Hyper-V Manager Icon
       
      Now, all you have to do is start the Hyper-V manager and you'll be presented with an interface apparently identical to the one previously available in Server 2012.

      Hyper-V Manager
       
However, this modern hypervisor has at least one option worthy of a separate explanation, and that is Generation 2 virtual machines.
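
For those who prefer PowerShell to the Hyper-V Manager GUI, a Generation 2 VM can be created with a one-liner like the one below (a sketch; the name, path and sizes are placeholders, and the commands assume an elevated session with the Hyper-V module available):

# Create a Generation 2 VM with 2 GB of startup memory and a new 40 GB dynamic VHDX
New-VM -Name "TestVM" -Generation 2 -MemoryStartupBytes 2GB -NewVHDPath "C:\VMs\TestVM.vhdx" -NewVHDSizeBytes 40GB

The new VM can then be attached to a virtual switch with Connect-VMNetworkAdapter and powered on with Start-VM.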

      Virtualization with Windows 10

       
      Many versions of Windows 10 and Windows 8.x include the Hyper-V virtualization technology. It is the same virtualization technology previously available only in Windows Server but this desktop version is referred to as Client Hyper-V. As in server versions, it is a Type 1 hypervisor which enables you to run more than one 32-bit or 64-bit virtualized operating system at the same time on top of a single physical host.

      Hyper V v10

      The technical approach remains pretty much the same as it was with Windows Server 2008 but a number of other features are now available.
       
Windows 10 Hyper-V
       
The management of the VMs created inside Client Hyper-V can be performed using tools created for Server Hyper-V, such as VMM P2V or Sysinternals Disk2vhd. In addition, Hyper-V virtual switch extensions and PowerShell scripts for managing VMs that you develop and test on Client Hyper-V can later be moved to Server Hyper-V.

      Cyberspace’s Ecological Impact

      Electricity consumption in data centers worldwide doubled between 2000 and 2005, but the pace of growth slowed between 2005 and 2010. This slowdown was the result of the 2008 economic crisis, the increasing use of virtualization in datacenters, and the industry's efforts to improve energy efficiency. However, the electricity consumed by datacenters globally in 2010 amounted to 1.3% of the world electricity use. Power consumption is now a major concern in the design and implementation of modern infrastructures because energy-related costs have become an important component of the total cost of ownership of this class of systems.

Thus, energy management is now a central issue for servers and datacenter operations, focusing on reducing all energy-related costs, such as investment, operating expenses and environmental impacts. The improvement of energy efficiency is a major problem in cloud computing because it has been calculated that the cost of powering and cooling a datacenter accounts for 53% of its total operational expenditure. But the pressure to provide services without any failure leads to continued over-scaling of systems at all levels of the power hierarchy, from the primary feed sources to the supporting equipment. In order to cover the worst-case situations, it is normal to over-provision Power Distribution Units (PDUs), Uninterruptible Power Supply (UPS) units, etc. For example, it has been estimated that power over-provisioning in Google data centers is about 40%.

      Cyberspace

Furthermore, in an attempt to ensure the redundancy of power systems, banks of diesel generators are kept running permanently so that the system does not fail even during the moments these support systems would take to boot up. These giant generators work continuously to ensure high availability in the event of a failure of any critical system, emitting large quantities of diesel exhaust, i.e., pollution. Thus, it is estimated that only about 9% of the energy consumed by datacenters is in fact used in computing operations; everything else is basically wasted keeping the servers ready to respond to any unforeseen power failure.

When we connect to the Internet, cyberspace can resemble outer space in the sense that it seems infinite and ethereal; the information is just out there. But if we think about the energy of the real world and the physical space occupied by the Internet, we begin to understand that things are not so simple. Cyberspace has real expression in physical space, and the longer it takes us to change our behavior in relation to the Internet and clearly see its physical characteristics, the closer we will be to entering a path of destruction of our planet.

      Previous Post – Next Post

      Cyberspace's Social Impact

Despite being fashionable and frequently mentioned, only a few people seem to know what the "cloud" really is. A recent study by Wakefield Research for Citrix shows that there is a huge difference between what U.S. citizens do and what they say when it comes to cloud computing. The survey of more than 1,000 American adults was conducted in August 2012 and showed that few average Americans know what cloud computing is.

For example, when asked what "the cloud" is, many responded that it is either an actual cloud, the sky or something related to the weather (29%). 51 percent of respondents believe stormy weather can interfere with cloud computing, and only 16% were able to link the term with the notion of a computer network to store, access and share data from Internet-connected devices. Besides, 54% of respondents claimed to have never used the cloud when in fact 95% of those who said so are actually using cloud services today via online shopping, banking, social networking and file sharing.

      Cloud Computing

What these results suggest is that the cloud is indeed transparent to users, fulfilling one of its main functions, which is to provide content and services easily and immediately. However, the lack of knowledge about the computing model that supports all of our everyday activities leads to a growing disengagement, with a consequent deterioration of concerns about content security and privacy.
In reality, cyberspace is not an aseptic place filled only with accurate and useful information. The great interest of cyberspace lies precisely in the fact that it allows for social vitality, based on a growing range of multimedia services. Its fascination comes from acting as a booster technology for the proliferation of all forms of sociability, being an instrument of connectivity. Therefore, cyberspace is not a purely cybernetic thing, but a living, chaotic, and uncontrolled entity.

      Beyond these concerns, others equally serious are emerging. By analyzing our daily use of these new technological tools, we conclude that the growth of the Internet is suffocating the planet. We have to face the CO2 emissions produced by our online activities as internal costs to the planet.
      We can start by showing some awareness of the problem, restricting our uploads and even removing some. Why not? What about reducing our photos on Facebook and Instagram? Keeping them permanently available consumes energy! If no one cares about our videos on YouTube, why not delete them? At least keep them where they do not need to be consuming energy.

We still have to go further: if awareness and self-discipline are not enough, we must consider the possibility of charging a cost for sharing large volumes of personal information. It is perhaps the only way to get most people to stop making unconscious use of the cloud, clogging it by dumping huge amounts of useless information into cyberspace. The goal is not to limit access to information, which should always be open, but rather to make proper and conscientious use of it.

      Next Post

      The Evolution of Computing: Cloud Computing


Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. Cloud computing is first and foremost a concept of distributed resource management and utilization. It aims at providing convenient endpoint access without requiring the purchase of software, platforms or physical network infrastructure, outsourcing them from third parties instead.

The arrangement may beneficially influence competitive advantage and flexibility, but it also brings about various challenges, namely privacy and security. In cloud computing, applications, computing and storage resources live somewhere in the network, or cloud. Users don’t worry about the location and can rapidly access as much or as little of the computing, storage and networking capacity as they wish, paying for it by how much they use, just as they would with water or electricity services provided by utility companies. The cloud is currently based on disjointedly operating data centers, but the idea of a unifying platform not unlike the Internet has already been proposed.

          Cloud Computing 
       
      In a cloud computing environment, the traditional role of service provider is divided into two: the infrastructure providers who manage cloud platforms and lease resources according to a usage-based pricing model, and service providers, who rent resources from one or many infrastructure providers to serve the end users. Cloud computing providers offer their services according to several fundamental models: software as a service, infrastructure as a service, platform as a service, desktop as a service, and more recently, backend as a service.
       
The backend as a service computing model, also known as "mobile backend as a service", is a relatively recent development in cloud computing, with most commercial services dating from 2011. This is a model for providing web and mobile application developers with a way to link their applications to backend cloud storage while also providing features such as user management, push notifications, and integration with social networking services. These services are provided via the use of custom software development kits (SDKs) and application programming interfaces (APIs). Although similar to other cloud-computing developer tools, this model is distinct from these other services in that it specifically addresses the cloud-computing needs of web and mobile application developers by providing a unified means of connecting their apps to cloud services. The global market for these services is estimated to be worth hundreds of millions of dollars in the coming years.
       
Cloud Computing
       
Clearly, public cloud computing is at an early stage in its evolution. However, all of the companies offering public cloud computing services have data centers; in fact, they are building some of the largest data centers in the world. They all have network architectures that demand flexibility, scalability, low operating cost, and high availability. They are built on top of products and technologies supplied by Brocade and other network vendors. These public cloud companies are building their business on data center designs that virtualize computing, storage, and network equipment, which is the foundation of their IT investment. Cloud computing over the Internet is commonly called “public cloud computing.” When used in the data center, it is commonly called “private cloud computing.” The difference lies in who maintains control and responsibility for servers, storage, and networking infrastructure and ensures that application service levels are met. In public cloud computing, some or all aspects of operations and management are handled by a third party “as a service.” Users can access an application or computing and storage using the Internet and the HTTP address of the service.

      Previous Post

      The Evolution of Computing: Virtualization


Countless PCs in organizations effectively killed the need for virtualization as a multi-tasking solution in the 1980s. At that time, virtualization was widely abandoned and not picked up again until the late 1990s, when the technology found a new use and purpose. The boom of the PC and datacenter industry brought an unprecedented increase in the need for computer space, as well as in the cost of power to support these installations. Back in 2002, data centers already accounted for 1.5 percent of total U.S. power consumption, and that share was growing by an estimated 10 percent every year. More than 5 million new servers were deployed every year, adding the equivalent power demand of thousands of new homes annually. As experts warned of excessive power usage, hardware makers began focusing on more power-efficient components to enable future growth and alleviate the need for data center cooling. Data center owners began developing smart design approaches to make the cooling and airflow in data centers more efficient.

        Datacenter Power&Cooling  
       
      At this time, most computing was supported by the highly inefficient x86-based IT model, originally created by Intel in 1978. Cheap hardware created the habit of over-provisioning and under-utilizing. Any time a new application was needed, it often required multiple systems for development and production use. Take this concept and multiply it out by a few servers in a multi-tier application, and it wasn't uncommon to see 8-10 new servers ordered for every application that was required. Most of these servers went highly underutilized since their existence was based on a non-regular testing schedule. It also often took a relatively intensive application to even put a dent in the total utilization capacity of a production server.  
       
       Server Virtualization  
       
In 1998, VMware solved the problem of virtualizing the old x86 architecture, opening a path to getting control over the wasteful nature of IT data centers. This server consolidation effort is what helped establish virtualization as a go-to technology for organizations of all sizes. IT departments started to notice capital expenditure savings by buying fewer, but higher-powered, servers to handle the workloads of 15-20 physical servers. Operational expenditure savings were accomplished through the reduced power consumption required for powering and cooling servers. There was also the realization that virtualization provided a platform for simplified availability and recoverability: a more responsive and sustainable IT infrastructure that afforded new opportunities to either keep critical workloads running or recover them more quickly than ever in the event of a more catastrophic failure.

Previous Post – Next Post

      The Evolution of Computing: The Internet Datacenter


      The boom of datacenters and datacenter hosting came during the dot-com era. Countless businesses needed nonstop operation and fast Internet connectivity to deploy systems and establish a presence on the Web. Installing data center hosting equipment was not a viable option for smaller companies. As the dot com bubble grew, companies began to understand the importance of having an Internet presence. Establishing this presence required that companies have fast and reliable Internet connectivity. They also had to have the capability to operate 24 hours a day in order to deploy new systems.

         Data Center  

      Soon, these new requirements resulted in the construction of extremely large data facilities, responsible for the operation of computer systems within a company and the deployment of new systems. However, not all companies could afford to operate a huge datacenter. The physical space, equipment requirements, and highly-trained staff made these large datacenters extremely expensive and sometimes impractical. In order to respond to this demand, many companies began building large facilities, called Internet Datacenters, which provided businesses of all sizes with a wide range of solutions for operations and system deployment.

         Datacenter  
       
      New technologies and practices were designed and implemented to handle the operation requirements and scale of such large-scale operations. These large datacenters revolutionized technologies and operating practices within the industry. Private datacenters were born out of this need for an affordable Internet datacenter solution. Today's private datacenters allow small businesses to have access to the benefits of the large Internet data centers without the expense of upkeep and the sacrifice of valuable physical space.
       
      Previous Post – Next Post
       
       

      The Evolution of Computing: Distributed Computing


After the microcomputers came the world of distributed systems. One important characteristic of the distributed computing environment was that all of the major OSs were available on small, low-cost servers. This feature meant that it was easy for various departments or any other corporate group to purchase servers outside the control of the traditional, centralized IT environment. As a result, applications often just appeared without following any of the standard development processes. Engineers programmed applications on their desktop workstations and used them for what later proved to be mission-critical or revenue-sensitive purposes. As they shared applications with others in their departments, their workstations became servers that served many people within the organization.

        Server Mess
       
In the distributed computing environment, it was common for applications to be developed following a one-application-to-one-server model. Because funding for application development came from vertical business units, and they insisted on having their applications on their own servers, another server was added each time an application was put into production. The problem created by this approach is significant because the one-application-to-one-server model is really a misnomer. In reality, each new application generally required the addition of at least three new servers, and often more: development servers, test servers, training servers, and cluster and disaster recovery servers.
       
        Messy Servers
       
Therefore, it became standard procedure in big corporations to purchase 8 or 10 servers for every new application being deployed. It was the prelude to the enormous bubble that would ultimately cause the collapse of many organizations that thought cyberspace was an easy and limitless way to make money.
       
Previous Post – Next Post