
Windows Server 2012 Hardening (Part II)

Using the Security Configuration and Analysis tool

Microsoft provides security templates for Windows Server and client operating systems, containing security configurations designed for different scenarios and server roles. Some of these templates are part of the operating system and are applied during certain operations, such as when promoting a server to a domain controller.

In Windows Server 2008 and later versions, security templates are located in %systemroot%\inf and are more limited than in Windows Server 2003. Templates include:

  • Defltbase.inf (baseline)
  • Defltsv.inf (web/file/print servers)
  • DCfirst.inf (for the first domain controller in a domain)
  • Defltdc.inf (other domain controllers)

Basically, you should repeat the procedures already explained for Windows 7 with the two different tools, but instead of loading the .inf from the STIG, you now load one of the security templates shipped with Windows Server 2012.
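
If you prefer to script the comparison instead of clicking through the console, the same check can be run with the built-in secedit tool. The snippet below is a minimal sketch of my own, not part of the original procedure: the paths are placeholders, it must run from an elevated prompt, and you should confirm with secedit /? that your build accepts /cfg together with /analyze.

import subprocess
from pathlib import Path

# Placeholder paths: point these at the template, database and log you want to use.
template = Path(r"C:\Windows\inf\defltbase.inf")   # baseline template shipped with the OS
database = Path(r"C:\Temp\baseline.sdb")           # analysis database secedit creates/uses
logfile = Path(r"C:\Temp\baseline-analysis.log")   # mismatches are reported here

# Compare the local security settings against the template (run elevated).
subprocess.run(
    ["secedit", "/analyze", "/db", str(database), "/cfg", str(template), "/log", str(logfile)],
    check=True,
)

print(f"Analysis complete; review {logfile} for mismatches.")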

Analyze the baseline template with the Policy Analyzer

Add the baseline template


Windows Server 2012 Hardening (Part I)

Servers are the penultimate layer of security between potential threats and your organization’s data. Therefore, applying proper security policies specifically for each server profile is both important and necessary.

Common sense recommendations are to "stop all unnecessary services" or "turn off unused features". Fortunately, every new version of Windows Server is built to be more secure by default. That said, it is common to have several different roles assigned to a single server, as well as multiple sets of file servers, web servers, database servers, and so on. So, how can we guarantee that each of these servers, with their different characteristics, is configured in compliance with security best practices?

Using the Security Compliance Manager

Using SCM in Windows Server is basically the same as using it on a workstation. The major difference is related to what you can do with your GPOs once you are done.

You cannot install SCM 4 on Windows Server 2012 just like that; you'll probably get a warning from the Program Compatibility Assistant. This is a known issue when installing SQL Server 2008 Express, even on supported operating systems.

Besides, Windows Server is not on the list of SCM 4 supported OSes…


To overcome this, install a newer version of SQL Server, like SQL Server 2014 Express, before installing SCM and everything will go smoothly.

The procedure will be exactly the same as what we did for Windows 10, but now we are going to do some extra steps.

GPEdit vs SecPol

Many users have questions regarding the difference between the Local Group Policy Editor (gpedit.msc) and the Local Security Policy (secpol.msc), but there is nothing mysterious about these two tools.

Both are used for administering system and security policies on your computer. The difference between the two is most visible in the scope of policies each tool can edit.

To start explaining the difference, we can say that the secpol.msc is a subcategory of gpedit.msc.


  • Gpedit.msc is the file name of the Group Policy Editor console, essentially a graphical user interface for editing registry-based policy settings. Editing those entries by hand is not easy, because they are scattered across many places in the registry, but this tool makes their administration much simpler.
  • Secpol.msc is another Windows module used for the administration of system settings. The Local Security Policy editor is a smaller sibling of the Group Policy Editor, used to administer a subset of what you can administer with gpedit.msc.

While group policies apply to your computer and users in your domain universally and are often set by your domain administrator from a central location, local security policies, as the name suggests, are relevant to your particular local machine only.

You can see that when opening the Group Policy Editor (gpedit.msc), you get to see more than when opening the Local Security Policy Editor (secpol.msc), and that is the major difference.

  • The gpedit.msc is broader.
  • The secpol.msc is narrower and focuses more on security-related registry entries (a quick way to inspect a couple of those entries is sketched below).
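
To see that registry connection for yourself, here is a small illustrative sketch of mine (not part of either tool) that reads two well-known security-policy values with Python's winreg module; the value names are only examples of the kind of settings these consoles ultimately manage.

import winreg

# Example policy-backed registry values; swap in whichever settings you care about.
CHECKS = [
    (r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System", "EnableLUA"),  # User Account Control
    (r"SYSTEM\CurrentControlSet\Control\Lsa", "LimitBlankPasswordUse"),           # blank-password logons
]

for path, name in CHECKS:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            print(f"HKLM\\{path}\\{name} = {value}")
    except FileNotFoundError:
        print(f"HKLM\\{path}\\{name} is not set")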

Previous post: Windows 10 Hardening (Part II)

Next post: Windows 2012 Hardening (Part I)

Windows 7 Hardening (Part II)

Enhanced Mitigation Experience Toolkit

EMET is a free tool built to offer additional security defenses against vulnerable third party applications and assorted vulnerabilities. EMET helps prevent vulnerabilities in software from being successfully exploited by using security mitigation technologies. These technologies function as special protections and obstacles that an exploit author must defeat to exploit software vulnerabilities. These security mitigation technologies work to make exploitation as difficult as possible to perform but do not guarantee that vulnerabilities cannot be exploited.

Download the tool here


and the User’s guide here.


Windows 7 Hardening (Part I)

Using the Microsoft Baseline Security Analyzer

Download MBSA 2.3. Install it and start a default scan on your Windows machine:


Typical results:


  • Analyze the report and the proposed solutions.
  • Enable the IIS Windows feature.
  • Repeat the MBSA scan.
  • Analyze the new report and compare it with the previous one (a small comparison sketch follows this list).
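
If you would rather compare the two reports programmatically than by eye, something along the lines of the sketch below can help. Treat it as an illustration only: the element and attribute names (Check, Name, Grade) and the file names are my assumptions about exported MBSA XML reports, so verify them against one of your own scan files first.

import xml.etree.ElementTree as ET

def checks(report_path):
    # Map each check's name to its grade; element/attribute names are assumptions.
    root = ET.parse(report_path).getroot()
    return {c.get("Name"): c.get("Grade") for c in root.iter("Check")}

before = checks("scan-before-iis.xml")   # hypothetical export of the first scan
after = checks("scan-after-iis.xml")     # hypothetical export of the scan after enabling IIS

for name in sorted(set(before) | set(after)):
    if before.get(name) != after.get(name):
        print(f"{name}: {before.get(name, 'absent')} -> {after.get(name, 'absent')}")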

System Hardening

System hardening refers to providing various means of protection in a computer system, eliminating as many security risks as possible. This is usually done by removing all non-essential software programs and utilities from the computer. While these programs may offer useful features to the user, they might provide "back-door" access to the system and thus must be removed to improve system security.

Extended system protection should be provided at various levels and is often referred to as defense in depth. Protecting in layers means protecting the host layer, the application layer, the operating system layer, the data layer, the physical layer, and all the sub-layers in between. Each of these layers requires a unique method of security.

 

Security Content Automation Protocol

SCAP is a method for using commonly accepted standards to enable automated vulnerability management and security policy compliance metrics. It started as a collection of specifications originally created by the US government which are now an industry standard.

It was developed through the cooperation and collaboration of public and private sector organizations, including government, industry and academia, but the standard is still maintained by the US National Institute of Standards and Technology.

 

Benefits of SCAP

Automated tools that use SCAP specifications make it easier to continuously verify the security compliance status of a wide variety of IT systems. The use of standardized, automated methods for system security management can help organizations operate more effectively in complex, interconnected environments and realize cost savings.

SCAP Components

  • CVE - Common Vulnerabilities and Exposures
    • Catalog of known security threats
  • CCE - Common Configuration Enumeration
    • List of “identifiers” and entries relating to security system configuration issues
    • Common identification enables correlation
  • CPE - Common Platform Enumeration
    • Structured naming scheme to describe systems, platforms, software
  • CVSS - Common Vulnerability Scoring System
    • Framework to describe the characteristics and impacts of IT vulnerabilities.
  • XCCDF - eXtensible Configuration Checklist Description Format
    • Security checklists, benchmarks and configuration documentation in XML format (a small parsing sketch follows this list).
  • OVAL - Open Vulnerability and Assessment Language
    • Common language for assessing the status of a vulnerability
  • OCIL – Open Checklist Interactive Language
    • Common language to express questions to be presented to a user and interpret responses
  • Asset Identification
    • This specification describes the purpose of asset identification, a data model and methods for identifying assets, and guidance on how to use asset identification.
  • ARF - Asset Reporting Format
    • Data model to express the transport format of information about assets, and the relationships between assets and reports.
  • CCSS - Common Configuration Scoring System
    • Set of measures of the severity of software security configuration issues
  • TMSAD - Trust Model for Security Automation Data
    • Common trust model that can be applied to specifications within the security automation domain.
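
Because most SCAP content is plain XML, it is easy to peek inside with a few lines of code. The sketch below is only an illustration, not part of SCAP itself: it lists rule IDs, severities and titles from an XCCDF 1.2 benchmark, the file name is a placeholder, and the namespace should be changed if your content uses XCCDF 1.1.

import xml.etree.ElementTree as ET

# Namespace for XCCDF 1.2 documents; older content may use .../xccdf/1.1 instead.
NS = {"x": "http://checklists.nist.gov/xccdf/1.2"}

root = ET.parse("benchmark-xccdf.xml").getroot()   # placeholder file name

for rule in root.iter(f"{{{NS['x']}}}Rule"):
    title = rule.find("x:title", NS)
    print(rule.get("id"), rule.get("severity"), title.text if title is not None else "")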


Security Baselines

US Government Configuration Baseline

The purpose of the USGCB initiative is to create security configuration baselines for Information Technology products widely deployed across the federal agencies.

The USGCB is a Federal government-wide initiative that provides guidance to agencies on what should be done to improve and maintain effective configuration settings, focusing primarily on security.

IT-Grundschutz

The aim of IT-Grundschutz is to achieve an appropriate security level for all types of information of an organization. IT-Grundschutz uses a holistic approach to this process.

Through proper application of well-proven technical, organizational, personnel, and infrastructural safeguards, a security level is reached that is suitable and adequate to protect business-related information having normal protection requirements. In many areas, IT-Grundschutz even provides advice for IT systems and applications requiring a high level of protection.

There are also the IT-Grundschutz Catalogues where you will find modules, threats and safeguards.

CERN Mandatory Security Baselines

The Security Baselines define a set of basic security objectives which must be met by any given service or system.

The objectives are chosen to be pragmatic and complete, and do not impose technical means.

Therefore, details on how these security objectives are fulfilled by a particular service/system must be documented in a separate "Security Implementation Document".

Microsoft Security Baselines

A security baseline is a collection of settings that have a security impact, together with Microsoft's recommended value for each setting and guidance on its security impact.

These settings are based on feedback from Microsoft security engineering teams, product groups, partners, and customers.

Cisco Network Security Baseline

Developing and deploying a security baseline can be challenging due to the vast range of features available.

The Network Security Baseline is designed to assist in this endeavor by outlining those key security elements that should be addressed in the first phase of implementing defense-in-depth.

The main focus of Network Security Baseline is to secure the network infrastructure itself: the control and management planes.

 

Security Standards

These are common industry-accepted standards that include specific weakness-correcting guidelines. The main ones are published by the following organizations:

 

Center for Internet Security

CIS Benchmarks are recommended technical settings for operating systems, middleware and software applications, and network devices. They are developed in a unique consensus-based process involving hundreds of security professionals worldwide and serve as de facto, best-practice configuration standards.

 

International Organization for Standardization

ISO/IEC 27002:2013 gives guidelines for organizational information security standards and information security management practices, including the selection, implementation and management of controls, taking into consideration the organization's information security risk environment(s).

 

National Institute of Standards and Technology

The National Checklist Program (NCP), defined by NIST SP 800-70 Rev. 3, is the U.S. government repository of publicly available security checklists (or benchmarks) that provide detailed, low-level guidance on setting the security configuration of operating systems and applications. NCP is migrating its repository of checklists to conform to SCAP, thus allowing standards-based security tools to automatically perform configuration checking using NCP checklists.

 

Defense Information Systems Agency

The Security Technical Implementation Guides (STIGs) and the NSA Guides are the configuration standards for DoD Information Assurance (IA) and IA-enabled devices/systems. The STIGs contain technical guidance to "lock down" information systems/software that might otherwise be vulnerable to a malicious computer attack.

 

Bundesamt für Sicherheit in der Informationstechnik

The BSI Standards contain recommendations on methods, processes, procedures, approaches and measures relating to information security.

 

Compliance Requirements

Any organization that manages payments, handles private customer data, or operates in markets governed by security regulations needs to demonstrate security compliance to avoid penalties and meet customer expectations. These are some of the major compliance requirements:

 

Payment Card Industry Data Security Standard

The PCI DSS is a set of security standards designed to ensure that all companies that accept, process, store or transmit credit card information maintain a secure environment. The PCI Security Standards Council was launched on September 7, 2006 to manage the ongoing evolution of the Payment Card Industry (PCI) security standards, with a focus on improving payment account security throughout the transaction process.

 

Health Insurance Portability and Accountability Act

The HIPAA Privacy Rule, also called the Standards for Privacy of Individually Identifiable Health Information, essentially defines how healthcare provider entities may use individually identifiable health information, or PHI (Protected Health Information).

 

Information Technology Infrastructure Library 

ITIL compliance guidelines include categories such as change management, security architecture and help desk systems. Companies can then find ways to accomplish ITIL compliance by using the appropriate systems and strategies.

 

Control Objectives for Information and Related Technology

COBIT is a framework created for IT governance and management. It is meant to be a supportive tool for managers and allows bridging the crucial gap between technical issues, business risks and control requirements.

 

National Institute of Standards and Technology

NIST is responsible for developing cybersecurity standards, guidelines, tests, and metrics for the protection of federal information systems. While developed for federal agency use, these resources are voluntarily adopted by other organizations because they are effective and accepted globally.

Next post: Windows 7 Hardening (Part I)

Creating Virtual Machines in Windows 10

 
Once you are done with the installation of Hyper-V, the creation of VMs is an easy procedure. First, you'll have to locate the Hyper-V manager icon and I suggest you place it in an easily accessible spot:
 
Hyper-V Manager Icon
 
Now, all you have to do is start the Hyper-V manager and you'll be presented with an interface apparently identical to the one previously available in Server 2012.

Hyper-V Manager
 
However, this modern hypervisor has at least one option worthy of a separate explanation, and that is Generation 2 virtual machines.

The Evolution of Computing: Cloud Computing


Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners because it eliminates the requirement for users to plan ahead for provisioning and allows enterprises to start small and increase resources only when there is a rise in service demand. Cloud computing is first and foremost a concept of distributed resource management and utilization. It aims at providing convenient access from any endpoint without requiring the purchase of software, platforms or physical network infrastructure, outsourcing them instead to third parties.

The arrangement may beneficially influence competitive advantage and flexibility, but it also brings about various challenges, namely privacy and security. In cloud computing, applications, computing and storage resources live somewhere in the network, or cloud. Users don't worry about the location and can rapidly access as much or as little of the computing, storage and networking capacity as they wish, paying for it by how much they use, just as they would with water or electricity services provided by utility companies. The cloud is currently based on disjointedly operating data centers, but the idea of a unifying platform, not unlike the Internet, has already been proposed.

    Cloud Computing 
 
In a cloud computing environment, the traditional role of service provider is divided into two: the infrastructure providers who manage cloud platforms and lease resources according to a usage-based pricing model, and service providers, who rent resources from one or many infrastructure providers to serve the end users. Cloud computing providers offer their services according to several fundamental models: software as a service, infrastructure as a service, platform as a service, desktop as a service, and more recently, backend as a service.
 
The backend as a service computing model, also known as "mobile backend as a service", is a relatively recent development in cloud computing, with most commercial services dating from 2011. This is a model for providing web and mobile application developers with a way to link their applications to backend cloud storage while also providing features such as user management, push notifications, and integration with social networking services. These services are provided via the use of custom software development kits (SDKs) and application programming interfaces (APIs). Although similar to other cloud-computing developer tools, this model is distinct from these other services in that it specifically addresses the cloud-computing needs of web and mobile application developers by providing a unified means of connecting their apps to cloud services. The global market for these services is estimated to be worth hundreds of millions of dollars in the coming years.
 
      Cloud Computing
 
Clearly, public cloud computing is at an early stage in its evolution. However, all of the companies offering public cloud computing services have data centers; in fact, they are building some of the largest data centers in the world. They all have network architectures that demand flexibility, scalability, low operating cost, and high availability. They are built on top of products and technologies supplied by Brocade and other network vendors. These public cloud companies are building their business on data center designs that virtualize computing, storage, and network equipment, which is the foundation of their IT investment. Cloud computing over the Internet is commonly called "public cloud computing"; when used in the data center, it is commonly called "private cloud computing". The difference lies in who maintains control and responsibility for servers, storage, and networking infrastructure and ensures that application service levels are met. In public cloud computing, some or all aspects of operations and management are handled by a third party "as a service". Users can access an application or computing and storage using the Internet and the HTTP address of the service.

Previous Post

The Evolution of Computing: Virtualization


Countless PCs in organizations effectively killed the need for virtualization as a multi-tasking solution in the 1980s. At that time, virtualization was widely abandoned and not picked up again until the late 1990s, when the technology would find a new use and purpose. The opportunity of a booming PC and datacenter industry brought an unprecedented increase in the need for computer space, as well as in the cost of power to support these installations. Back in 2002, data centers already accounted for 1.5 percent of total U.S. power consumption, and that share was growing by an estimated 10 percent every year. More than 5 million new servers were deployed every year, adding the power demand of thousands of new homes annually. As experts warned of excessive power usage, hardware makers began focusing on more power-efficient components to enable growth for the future and alleviate the need for data center cooling. Data center owners began developing smart design approaches to make the cooling and airflow in data centers more efficient.

  Datacenter Power&Cooling  
 
At this time, most computing was supported by the highly inefficient x86-based IT model, originally created by Intel in 1978. Cheap hardware created the habit of over-provisioning and under-utilizing. Any time a new application was needed, it often required multiple systems for development and production use. Take this concept and multiply it out by a few servers in a multi-tier application, and it wasn't uncommon to see 8-10 new servers ordered for every application that was required. Most of these servers went highly underutilized since their existence was based on a non-regular testing schedule. It also often took a relatively intensive application to even put a dent in the total utilization capacity of a production server.  
 
 Server Virtualization  
 
In 1998, VMware solved the problem of virtualizing the old x86 architecture, opening a path to getting control over the wasteful nature of IT data centers. This server consolidation effort is what helped establish virtualization as a go-to technology for organizations of all sizes. IT started to notice capital expenditure savings by buying fewer, but higher-powered, servers to handle the workloads of 15-20 physical servers. Operational expenditure savings were achieved through the reduced power consumption required for powering and cooling servers. Just as important was the realization that virtualization provided a platform for simplified availability and recoverability. Virtualization offered a more responsive and sustainable IT infrastructure that afforded new opportunities to either keep critical workloads running, or recover them more quickly than ever in the event of a more catastrophic failure.

Previous Post – Next Post

The Evolution of Computing: The Internet Datacenter


The boom of datacenters and datacenter hosting came during the dot-com era. Countless businesses needed nonstop operation and fast Internet connectivity to deploy systems and establish a presence on the Web. Installing data center hosting equipment was not a viable option for smaller companies. As the dot-com bubble grew, companies began to understand the importance of having an Internet presence. Establishing this presence required that companies have fast and reliable Internet connectivity, and they also had to have the capability to operate 24 hours a day in order to deploy new systems.

   Data Center  

Soon, these new requirements resulted in the construction of extremely large data facilities, responsible for the operation of computer systems within a company and the deployment of new systems. However, not all companies could afford to operate a huge datacenter. The physical space, equipment requirements, and highly-trained staff made these large datacenters extremely expensive and sometimes impractical. In order to respond to this demand, many companies began building large facilities, called Internet Datacenters, which provided businesses of all sizes with a wide range of solutions for operations and system deployment.

   Datacenter  
 
New technologies and practices were designed and implemented to handle the operation requirements and scale of such large-scale operations. These large datacenters revolutionized technologies and operating practices within the industry. Private datacenters were born out of this need for an affordable Internet datacenter solution. Today's private datacenters allow small businesses to have access to the benefits of the large Internet data centers without the expense of upkeep and the sacrifice of valuable physical space.
 
Previous Post – Next Post
 
 

The Evolution of Computing: Distributed Computing


After the microcomputers came the world of distributed systems. One important characteristic of the distributed computing environment was that all of the major OSs were available on small, low-cost servers. This feature meant that it was easy for various departments or any other corporate group to purchase servers outside the control of the traditional, centralized IT environment. As a result, applications often just appeared without following any of the standard development processes. Engineers programmed applications on their desktop workstations and used them for what later proved to be mission-critical or revenue-sensitive purposes. As they shared applications with others in their departments, their workstations became servers that served many people within the organization.

  Server Mess
 
In the distributed computing environment, it was common for applications to be developed following a one-application-to-one-server model. Because funding for application development came from vertical business units, and they insisted on having their applications on their own servers, each time an application was put into production another server was added. The problem created by this approach is significant because the one-application-to-one-server model is really a misnomer. In reality, each new application generally required the addition of at least three new servers, and often more, as follows: development servers, test servers, training servers, and cluster and disaster recovery servers.
 
  Messy Servers
 
Therefore, it became standard procedure in big corporations to purchase 8 or 10 servers for every new application being deployed. It was the prelude to the enormous bubble that would ultimately cause the collapse of many organizations who thought cyberspace was an easy and limitless way to make money.
 
Previous Post – Next Post

The Evolution of Computing: Personal Computing


Initially, companies developed applications on minicomputers because it gave them more freedom than they had in the mainframe environment. The rules and processes used in this environment were typically more flexible than those in the mainframe environment, giving developers freedom to be more creative when writing applications. In many ways, minis were the first step towards freedom from mainframe computing. However, with each computer being managed the way its owner chose to manage it, a lack of accepted policies and procedures often led to a somewhat chaotic environment. Further, because each mini vendor had its own proprietary OS, programs written for one vendor's mini were difficult to port to another mini. In most cases, changing vendors meant rewriting applications for the new OS. This lack of application portability was a major factor in the demise of the mini.

During the 1980s, the computer industry experienced the boom of the microcomputer era. In the excitement accompanying this boom, computers were installed everywhere, and little thought was given to the specific environmental and operating requirements of the machines. From this point on, computing that had previously been done on terminals that served only to interact with the mainframe, the so-called "dumb terminals", moved to personal computers, machines with their own resources. This new computing model was the embryo of the modern cyberspace with all the services that we know today.
 
IBM PC 5150

The Evolution of Computing: The Mainframe Era

Modern datacenters have their origins in the huge computer rooms of the early computing industry.  Old computers required an enormous amount of power and had to constantly be cooled to avoid overheating. In addition, security was of great importance because computers were extremely expensive and commonly used for military purposes, so basic guidelines for controlling access to computer rooms were devised.

IBM 704 (1954)

The Evolution of Computing: Overview

In our time, cyberspace is an integral part of the lives of many millions of citizens around the world who dive into it for work or just for fun. Our daily life is now occupied by a plethora of user-friendly technologies that allow us to have more time for other activities, increase our productivity and have far more access to all kinds of information. But it was not always so; to reach this stage we went through about 50 years of development. This series of articles will summarize the evolution of the different computing models that underpin much of modern life and discuss some of the future trends that will certainly change the way we relate to information technology and interact with each other.

In recent decades, computer technology has undergone a revolution that catapulted us into a new society of growing complexity, and, from a certain point, we started to take for granted the use of all the technology at our disposal, without thinking about the future consequences of our actions. Among all that we take for granted today, cyberspace is near the top of the list. The promise of the Internet for the twenty-first century is to provide everything everywhere, anytime and anywhere. All human achievements, all culture, all the news will be within reach with just one simple mouse click. The history of computers and cyberspace is critical to understanding contemporary communication and, although they do not constitute the only element of communication in the second half of the twentieth century, they must, by virtue of their importance, come first in any credible historical analysis, since they were handed a huge set of tasks that go well beyond the realm of communication.

For many internet users, access to this virtual world is a sure thing, but for many others it does not even exist. Despite its exponential growth and its geographical dispersion, the physical distribution of communications networks is still far from being uniform across all regions of the planet. Moreover, the widespread adoption of mobile telecommunications gives cyberspace a character of uniformity that permits an almost complete abstraction of its physical support. The last few years have been a truly explosive growth phase in information technology, particularly the Internet. Following this expansion, the term cyberspace has become commonly used to describe a virtual world that Internet users inhabit when they are online, accessing the most diverse content, playing games or using the widely varying interactive services that the Internet provides. But it is crucial to distinguish cyberspace from telematics networks, because there is a widespread conceptual confusion between the two.

Telematics produces distance communication via computer and telecommunications, while cyberspace is a virtual environment that relies on these media to establish virtual relationships. Thus, I believe the Internet, while being the main global telematics network, does not represent the entire cyberspace because this is something larger that can arise from man's relationship with other technologies such as GPS, biometric sensors and surveillance cameras. In reality, cyberspace can be seen as a new dimension of society where social relationships networks are redefined through new flows of information.

We can visit a distant museum in the comfort of our home, or access any news from a newspaper published thousands of miles away, with a simple mouse click on our computer. Thus, it becomes necessary to think about regulating this area for the sake of the common good of the planet. The economy of cyberspace has no self-regulation mechanism that limits its growth, so the current key issues for business are getting cheap energy and keeping transmission times in milliseconds. Revenues from services like Facebook and YouTube are not derived from costs to users, so, from the user's point of view, cyberspace is free and infinite. As long as people don't feel any cost in using cyberspace, they will continue to use it without restrictions, and this will at some point become unbearable.

Therefore, the purpose of these articles is to present a brief analysis of the rise and transformations through which these machines and associated technologies have undergone in recent decades, directly affecting the lives of human beings and their work and communication processes.

Next Post

How to use the Virtualization Lab (II)


Picking up from where I left off, it was now time to change the setup into something very different. The first step was the creation of another VM inside Hyper-V to be used as an alternative source of iSCSI storage. I achieved this by installing the Microsoft iSCSI Target 3.3 on a new Server 2008 R2 x64 VM. I created this machine with two .vhd files: one for the OS and the other for the iSCSI storage.

I will now show you the steps taken to create three new iSCSI virtual disks:

Creation of the iSCSI target:

iSCSI 1

How to use the Virtualization Lab (I)


I finished the last post in this series with a fully working cluster installed between two Hyper-V virtual machines (VMs), using a virtual iSCSI solution installed on a VirtualBox VM, as depicted in the next picture:

Virtualization Lab 1 
Before moving on in the process of adding complexity to the lab scenario, don't forget to safeguard your work; although this is just a lab, that doesn't reduce the nuisance of having to reinstall everything in the event of a failure. So, create VM snapshots:

High Availability with Failover Clusters

Before moving on to the next chapter on my virtualization lab series, I think this might be a good opportunity to review some of the clustering options available today. I will use Windows Server Failover Clustering with Hyper-V because in today's world the trend is to combine Virtualization with High Availability (HA).

There are many ways to implement these solutions and the basic design concepts presented here can be adjusted to other virtualization platforms. Some of them will actually not guarantee a fault-tolerant solution, but most of them can be used in specific scenarios (even if only for demonstration purposes).

Two virtual machines on one physical server


In this scenario an HA cluster is built between two (or more) virtual machines on a single physical machine. Here we have a single physical server running Hyper-V and two child partitions where you run Failover Clustering. This setup does not protect against hardware failures because when the physical server fails, both (virtual) cluster nodes will fail. Therefore, the physical machine itself is a single point of failure (SPOF).

Two virtual machines on one physical server

How to Setup a Virtualization Lab (III)

Failover Cluster Networking



The first step in the setup of a failover cluster is the creation of an AD domain, because all the cluster nodes have to belong to the same domain. But before doing so, I changed the network settings again in order to adjust them for this purpose.

LAB-DC:
IP: 192.168.1.10
Gateway: 192.168.1.1 (Physical Router)
DNS: 127.0.0.1
Alternate DNS: 192.168.1.1

LAB-NODE1:
IP: 192.168.1.11
Gateway: 192.168.1.1
DNS: 192.168.1.10 (DC)
Alternate DNS: 192.168.1.1 (Physical Router)

LAB-NODE2:
IP: 192.168.1.12
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1

LAB-NODE3:
IP: 192.168.1.13
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1

LAB-STORAGE:
IP: 192.168.1.14
Gateway: 192.168.1.1
DNS: 192.168.1.10
Alternate DNS: 192.168.1.1
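
Before creating the domain, I like to confirm that every node resolves and answers on the addresses above. The short sketch below is just a convenience check of my own, assuming it runs on a Windows machine in the lab that uses the DC as its DNS server.

import socket
import subprocess

NODES = {
    "LAB-DC": "192.168.1.10",
    "LAB-NODE1": "192.168.1.11",
    "LAB-NODE2": "192.168.1.12",
    "LAB-NODE3": "192.168.1.13",
    "LAB-STORAGE": "192.168.1.14",
}

for name, expected in NODES.items():
    # Name resolution should go through the DC configured as primary DNS.
    try:
        resolved = socket.gethostbyname(name)
    except socket.gaierror:
        resolved = "unresolved"
    # One ICMP echo request per node; "-n 1" is the Windows ping syntax.
    alive = subprocess.run(["ping", "-n", "1", expected],
                           capture_output=True).returncode == 0
    print(f"{name:12} expected={expected:14} resolved={resolved:14} ping={'ok' if alive else 'FAIL'}")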

Therefore, I created a domain composed of five machines: a DC and two member servers as Hyper-V VMs, a member server as a VMware VM, and another member server as a VirtualBox VM.

So far I have demonstrated the possibility of integrating, in the same logical infrastructure, virtualized servers running on different platforms using different virtualization techniques; in this case we have VMs running on a Type 1 hypervisor (Hyper-V) and on two distinct Type 2 hypervisors (VMware Workstation and VirtualBox).

The option to create a network with static IP addresses is as valid as the alternative of using DHCP. Later on I plan to explore the several options provided by cluster networking in Windows Server 2008, but for the time being I kept my network in a simple and basic configuration in order to proceed with the lab installation.

How to Setup a Virtualization Lab (II)


As mentioned at the end of my previous article, the installation of my lab continued with the creation of virtual machines on the desktop computer. But this time I used VMware and VirtualBox to explore the possibility of using a set of virtualized servers across different and competing virtualization technologies.

I insisted on the network configuration details because that is the basis of all the work ahead; a single virtual machine may be important, but I want to show how they can work together, and therefore the correct network configuration is of paramount importance.

Import a Virtual Machine into VMware


I started by installing a VM on VMware Workstation. Better yet, I took advantage of what was previously done and used the generalized .vhd file I left behind! Since VMware does not directly support the use of .vhd files, I had to convert the file from the format used by Hyper-V (Virtual Hard Disk, i.e., .vhd) to the format used by VMware (Virtual Machine Disk, i.e., .vmdk).

The VMware vCenter Converter Standalone utility is a free application that can be obtained directly from VMware's official site, but it doesn't solve the problem, as it doesn't support this type of conversion, although it can convert from other formats and even directly from servers running Hyper-V. But what interested me was to reuse the work already done, and so I resorted to the WinImage tool.

The process was very simple:

I selected the appropriate option from the Disk menu and selected the proper source file;

WinImage
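
As a side note, the open-source qemu-img tool can perform the same .vhd-to-.vmdk conversion from a script, should you prefer that route to WinImage. A minimal sketch, with placeholder paths:

import subprocess

src = r"C:\VMs\generalized.vhd"    # placeholder: the generalized Hyper-V disk
dst = r"C:\VMs\generalized.vmdk"   # placeholder: the disk VMware Workstation will use

# qemu-img refers to the VHD format as "vpc"; -O selects the output format.
subprocess.run(["qemu-img", "convert", "-f", "vpc", "-O", "vmdk", src, dst], check=True)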


How to Setup a Virtualization Lab (I)


Now that I have concluded a general overview of most of the theory related to High Availability and Virtualization it is time to start testing some of those concepts and see them in action.

My goal for the next posts is to produce a series of tutorials showing how anyone can easily install a handful of virtual machines and explore the wonderful possibilities provided by this technology. I will be using an old laptop powered by a Turion 64 X2 CPU with a 250 GB SSD and 4 GB of RAM, combined with a desktop running Windows 7 Ultimate on an Athlon 64 X2 4800+ with 4 GB of RAM and lots of free disk space scattered across 3 SATA hard drives.

Virtual Machines Creation


I will not go through the details of OS installation because I am assuming the ones reading these tutorials are way past that.

I started by installing a fresh copy of Windows Server 2008 R2 SP1 Standard on a secondary partition in my laptop.  Once I was done with the installation of all the available updates from Windows Update and with OS activation, I was ready to add the Hyper-V role in order to be able to install the virtual machines. To do this I just went into Server Manager/Roles, started the Add Roles Wizard, selected Hyper-V and followed the procedures. Nothing special so far, right?

Hyper-V Role

Note: All the pictures are clickable and will open a larger version in a separate window.