Security Identifier (SID)
A security identifier (SID) is a unique value of variable length that is used to identify a security principal or security group in Windows operating systems. Well-known SIDs are a group of SIDs that identify generic users or generic groups. Their values remain constant across all operating systems.
This information is useful when you troubleshoot security issues. It also helps with display problems in the ACL editor, where a SID may be shown in place of the user or group name.
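Every SID in the list below follows the same shape: the literal S, a revision, an identifier authority, and zero or more subauthorities, all joined by hyphens. The parsing can be sketched in a few lines of Python (parse_sid is a hypothetical helper for illustration, not a Windows API):

```python
def parse_sid(sid):
    """Split a SID string such as 'S-1-5-32-544' into
    (revision, identifier_authority, [subauthorities])."""
    parts = sid.split("-")
    if len(parts) < 3 or parts[0] != "S":
        raise ValueError("not a valid SID string: %r" % sid)
    revision = int(parts[1])
    authority = int(parts[2])
    subauthorities = [int(p) for p in parts[3:]]
    return revision, authority, subauthorities

# The Everyone group, S-1-1-0: revision 1, World Authority (1), one subauthority
print(parse_sid("S-1-1-0"))  # (1, 1, [0])
```

The identifier authority (the third field) is what the S-1-0 through S-1-5 entries below describe; everything after it is subauthority data.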
Well-known SIDs:
• SID: S-1-0
Name: Null Authority
Description: An identifier authority.
• SID: S-1-0-0
Name: Nobody
Description: No security principal.
• SID: S-1-1
Name: World Authority
Description: An identifier authority.
• SID: S-1-1-0
Name: Everyone
Description: A group that includes all users, even anonymous users and guests. Membership is controlled by the operating system.
Note By default, the Everyone group no longer includes anonymous users on a computer that is running Windows XP Service Pack 2 (SP2).
• SID: S-1-2
Name: Local Authority
Description: An identifier authority.
• SID: S-1-2-0
Name: Local
Description: A group that includes all users who have logged on locally.
• SID: S-1-2-1
Name: Console Logon
Description: A group that includes users who are logged on to the physical console.
Note Added in Windows 7 and Windows Server 2008 R2
• SID: S-1-3
Name: Creator Authority
Description: An identifier authority.
• SID: S-1-3-0
Name: Creator Owner
Description: A placeholder in an inheritable access control entry (ACE). When the ACE is inherited, the system replaces this SID with the SID for the object's creator.
• SID: S-1-3-1
Name: Creator Group
Description: A placeholder in an inheritable ACE. When the ACE is inherited, the system replaces this SID with the SID for the primary group of the object's creator. The primary group is used only by the POSIX subsystem.
• SID: S-1-3-2
Name: Creator Owner Server
Description: This SID is not used in Windows 2000.
• SID: S-1-3-3
Name: Creator Group Server
Description: This SID is not used in Windows 2000.
• SID: S-1-3-4
Name: Owner Rights
Description: A group that represents the current owner of the object. When an ACE that carries this SID is applied to an object, the system ignores the implicit READ_CONTROL and WRITE_DAC permissions for the object owner.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-4
Name: Non-unique Authority
Description: An identifier authority.
• SID: S-1-5
Name: NT Authority
Description: An identifier authority.
• SID: S-1-5-1
Name: Dialup
Description: A group that includes all users who have logged on through a dial-up connection. Membership is controlled by the operating system.
• SID: S-1-5-2
Name: Network
Description: A group that includes all users that have logged on through a network connection. Membership is controlled by the operating system.
• SID: S-1-5-3
Name: Batch
Description: A group that includes all users that have logged on through a batch queue facility. Membership is controlled by the operating system.
• SID: S-1-5-4
Name: Interactive
Description: A group that includes all users that have logged on interactively. Membership is controlled by the operating system.
• SID: S-1-5-5-X-Y
Name: Logon Session
Description: A logon session. The X and Y values for these SIDs are different for each session.
• SID: S-1-5-6
Name: Service
Description: A group that includes all security principals that have logged on as a service. Membership is controlled by the operating system.
• SID: S-1-5-7
Name: Anonymous
Description: A group that includes all users that have logged on anonymously. Membership is controlled by the operating system.
• SID: S-1-5-8
Name: Proxy
Description: This SID is not used in Windows 2000.
• SID: S-1-5-9
Name: Enterprise Domain Controllers
Description: A group that includes all domain controllers in a forest that uses an Active Directory directory service. Membership is controlled by the operating system.
• SID: S-1-5-10
Name: Principal Self
Description: A placeholder in an inheritable ACE on an account object or group object in Active Directory. When the ACE is inherited, the system replaces this SID with the SID for the security principal who holds the account.
• SID: S-1-5-11
Name: Authenticated Users
Description: A group that includes all users whose identities were authenticated when they logged on. Membership is controlled by the operating system.
• SID: S-1-5-12
Name: Restricted Code
Description: This SID is reserved for future use.
• SID: S-1-5-13
Name: Terminal Server Users
Description: A group that includes all users that have logged on to a Terminal Services server. Membership is controlled by the operating system.
• SID: S-1-5-14
Name: Remote Interactive Logon
Description: A group that includes all users who have logged on through a terminal services logon.
• SID: S-1-5-15
Name: This Organization
Description: A group that includes all users from the same organization. Only included with AD accounts and only added by a Windows Server 2003 or later domain controller.
• SID: S-1-5-17
Name: IUSR
Description: An account that is used by the default Internet Information Services (IIS) user.
• SID: S-1-5-18
Name: Local System
Description: A service account that is used by the operating system.
• SID: S-1-5-19
Name: Local Service
Description: A service account that runs local services with minimum privileges and presents anonymous credentials on the network.
• SID: S-1-5-20
Name: Network Service
Description: A service account that runs network services with minimum local privileges and presents the computer's credentials to remote servers.
• SID: S-1-5-21-domain-500
Name: Administrator
Description: A user account for the system administrator. By default, it is the only user account that is given full control over the system.
• SID: S-1-5-21-domain-501
Name: Guest
Description: A user account for people who do not have individual accounts. This user account does not require a password. By default, the Guest account is disabled.
• SID: S-1-5-21-domain-502
Name: KRBTGT
Description: A service account that is used by the Key Distribution Center (KDC) service.
• SID: S-1-5-21-domain-512
Name: Domain Admins
Description: A global group whose members are authorized to administer the domain. By default, the Domain Admins group is a member of the Administrators group on all computers that have joined a domain, including the domain controllers. Domain Admins is the default owner of any object that is created by any member of the group.
• SID: S-1-5-21-domain-513
Name: Domain Users
Description: A global group that, by default, includes all user accounts in a domain. When you create a user account in a domain, it is added to this group by default.
• SID: S-1-5-21-domain-514
Name: Domain Guests
Description: A global group that, by default, has only one member, the domain's built-in Guest account.
• SID: S-1-5-21-domain-515
Name: Domain Computers
Description: A global group that includes all clients and servers that have joined the domain.
• SID: S-1-5-21-domain-516
Name: Domain Controllers
Description: A global group that includes all domain controllers in the domain. New domain controllers are added to this group by default.
• SID: S-1-5-21-domain-517
Name: Cert Publishers
Description: A global group that includes all computers that are running an enterprise certification authority. Cert Publishers are authorized to publish certificates for User objects in Active Directory.
• SID: S-1-5-21-root domain-518
Name: Schema Admins
Description: A universal group in a native-mode domain; a global group in a mixed-mode domain. The group is authorized to make schema changes in Active Directory. By default, the only member of the group is the Administrator account for the forest root domain.
• SID: S-1-5-21-root domain-519
Name: Enterprise Admins
Description: A universal group in a native-mode domain; a global group in a mixed-mode domain. The group is authorized to make forest-wide changes in Active Directory, such as adding child domains. By default, the only member of the group is the Administrator account for the forest root domain.
• SID: S-1-5-21-domain-520
Name: Group Policy Creator Owners
Description: A global group that is authorized to create new Group Policy objects in Active Directory. By default, the only member of the group is Administrator.
• SID: S-1-5-21-domain-553
Name: RAS and IAS Servers
Description: A domain local group. By default, this group has no members. Servers in this group have Read Account Restrictions and Read Logon Information access to User objects in the Active Directory domain.
• SID: S-1-5-32-544
Name: Administrators
Description: A built-in group. After the initial installation of the operating system, the only member of the group is the Administrator account. When a computer joins a domain, the Domain Admins group is added to the Administrators group. When a server becomes a domain controller, the Enterprise Admins group also is added to the Administrators group.
• SID: S-1-5-32-545
Name: Users
Description: A built-in group. After the initial installation of the operating system, the only member is the Authenticated Users group. When a computer joins a domain, the Domain Users group is added to the Users group on the computer.
• SID: S-1-5-32-546
Name: Guests
Description: A built-in group. By default, the only member is the Guest account. The Guests group allows occasional or one-time users to log on with limited privileges to a computer's built-in Guest account.
• SID: S-1-5-32-547
Name: Power Users
Description: A built-in group. By default, the group has no members. Power users can create local users and groups; modify and delete accounts that they have created; and remove users from the Power Users, Users, and Guests groups. Power users also can install programs; create, manage, and delete local printers; and create and delete file shares.
• SID: S-1-5-32-548
Name: Account Operators
Description: A built-in group that exists only on domain controllers. By default, the group has no members. By default, Account Operators have permission to create, modify, and delete accounts for users, groups, and computers in all containers and organizational units of Active Directory except the Builtin container and the Domain Controllers OU. Account Operators do not have permission to modify the Administrators and Domain Admins groups, nor do they have permission to modify the accounts for members of those groups.
• SID: S-1-5-32-549
Name: Server Operators
Description: A built-in group that exists only on domain controllers. By default, the group has no members. Server Operators can log on to a server interactively; create and delete network shares; start and stop services; back up and restore files; format the hard disk of the computer; and shut down the computer.
• SID: S-1-5-32-550
Name: Print Operators
Description: A built-in group that exists only on domain controllers. By default, the only member is the Domain Users group. Print Operators can manage printers and document queues.
• SID: S-1-5-32-551
Name: Backup Operators
Description: A built-in group. By default, the group has no members. Backup Operators can back up and restore all files on a computer, regardless of the permissions that protect those files. Backup Operators also can log on to the computer and shut it down.
• SID: S-1-5-32-552
Name: Replicators
Description: A built-in group that is used by the File Replication service on domain controllers. By default, the group has no members. Do not add users to this group.
• SID: S-1-5-64-10
Name: NTLM Authentication
Description: A SID that is used when the NTLM authentication package authenticated the client.
• SID: S-1-5-64-14
Name: SChannel Authentication
Description: A SID that is used when the SChannel authentication package authenticated the client.
• SID: S-1-5-64-21
Name: Digest Authentication
Description: A SID that is used when the Digest authentication package authenticated the client.
• SID: S-1-5-80
Name: NT Service
Description: The NT Service account prefix.
• SID: S-1-16-0
Name: Untrusted Mandatory Level
Description: An untrusted integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-4096
Name: Low Mandatory Level
Description: A low integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-8192
Name: Medium Mandatory Level
Description: A medium integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-8448
Name: Medium Plus Mandatory Level
Description: A medium plus integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-12288
Name: High Mandatory Level
Description: A high integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-16384
Name: System Mandatory Level
Description: A system integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-20480
Name: Protected Process Mandatory Level
Description: A protected-process integrity level.
Note Added in Windows Vista and Windows Server 2008
• SID: S-1-16-28672
Name: Secure Process Mandatory Level
Description: A secure process integrity level.
Note Added in Windows Vista and Windows Server 2008
The following groups will show as SIDs until a Windows Server 2003 domain controller is made the primary domain controller (PDC) operations master role holder. (The "operations master" is also known as flexible single master operations or FSMO.) Additional new built-in groups that are created when a Windows Server 2003 domain controller is added to the domain are:
• SID: S-1-5-32-554
Name: BUILTIN\Pre-Windows 2000 Compatible Access
Description: An alias added by Windows 2000. A backward-compatibility group that allows read access to all users and groups in the domain.
• SID: S-1-5-32-555
Name: BUILTIN\Remote Desktop Users
Description: An alias. Members of this group are granted the right to log on remotely.
• SID: S-1-5-32-556
Name: BUILTIN\Network Configuration Operators
Description: An alias. Members of this group can have some administrative privileges to manage the configuration of networking features.
• SID: S-1-5-32-557
Name: BUILTIN\Incoming Forest Trust Builders
Description: An alias. Members of this group can create incoming, one-way trusts to this forest.
• SID: S-1-5-32-558
Name: BUILTIN\Performance Monitor Users
Description: An alias. Members of this group have remote access to monitor this computer.
• SID: S-1-5-32-559
Name: BUILTIN\Performance Log Users
Description: An alias. Members of this group have remote access to schedule logging of performance counters on this computer.
• SID: S-1-5-32-560
Name: BUILTIN\Windows Authorization Access Group
Description: An alias. Members of this group have access to the computed tokenGroupsGlobalAndUniversal attribute on User objects.
• SID: S-1-5-32-561
Name: BUILTIN\Terminal Server License Servers
Description: An alias. A group for Terminal Server License Servers. When Windows Server 2003 Service Pack 1 is installed, a new local group is created.
• SID: S-1-5-32-562
Name: BUILTIN\Distributed COM Users
Description: An alias. A group for COM to provide computer-wide access controls that govern access to all call, activation, or launch requests on the computer.
The following groups will show as SIDs until a Windows Server 2008 or Windows Server 2008 R2 domain controller is made the primary domain controller (PDC) operations master role holder. (The "operations master" is also known as flexible single master operations or FSMO.) Additional new built-in groups that are created when a Windows Server 2008 or Windows Server 2008 R2 domain controller is added to the domain are:
• SID: S-1-5-21-domain-498
Name: Enterprise Read-only Domain Controllers
Description: A universal group. Members of this group are read-only domain controllers in the enterprise.
• SID: S-1-5-21-domain-521
Name: Read-only Domain Controllers
Description: A global group. Members of this group are read-only domain controllers in the domain.
• SID: S-1-5-32-569
Name: BUILTIN\Cryptographic Operators
Description: A Builtin Local group. Members are authorized to perform cryptographic operations.
• SID: S-1-5-21-domain-571
Name: Allowed RODC Password Replication Group
Description: A Domain Local group. Members in this group can have their passwords replicated to all read-only domain controllers in the domain.
• SID: S-1-5-21-domain-572
Name: Denied RODC Password Replication Group
Description: A Domain Local group. Members in this group cannot have their passwords replicated to any read-only domain controllers in the domain.
• SID: S-1-5-32-573
Name: BUILTIN\Event Log Readers
Description: A Builtin Local group. Members of this group can read event logs from the local machine.
• SID: S-1-5-32-574
Name: BUILTIN\Certificate Service DCOM Access
Description: A Builtin Local group. Members of this group are allowed to connect to Certification Authorities in the enterprise.
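The fixed SIDs above can be resolved with a plain table lookup, while domain-relative SIDs (the S-1-5-21-domain-RID family) first need the trailing RID extracted. A minimal sketch, with a deliberately abbreviated table and a made-up domain value for illustration:

```python
import re

# A few entries copied from the tables above; extend as needed.
WELL_KNOWN = {
    "S-1-1-0": "Everyone",
    "S-1-5-11": "Authenticated Users",
    "S-1-5-18": "Local System",
    "S-1-5-32-544": "Administrators",
    "S-1-5-32-545": "Users",
}

# Domain-relative RIDs: the trailing number in S-1-5-21-<domain>-<rid>.
DOMAIN_RIDS = {
    500: "Administrator",
    501: "Guest",
    512: "Domain Admins",
    513: "Domain Users",
}

def lookup(sid):
    """Resolve a SID string to a well-known name, or None if unknown."""
    if sid in WELL_KNOWN:
        return WELL_KNOWN[sid]
    # A domain SID carries three machine/domain subauthorities before the RID.
    m = re.match(r"S-1-5-21(-\d+){3}-(\d+)$", sid)
    if m:
        return DOMAIN_RIDS.get(int(m.group(2)))
    return None

print(lookup("S-1-5-32-544"))                                   # Administrators
print(lookup("S-1-5-21-1004336348-1177238915-682003330-512"))   # Domain Admins
```

This mirrors what the ACL editor does when it succeeds; when the name cannot be resolved (for example, a deleted account), the raw SID string is what remains on screen.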
Description of RID Attributes in Active Directory
This article describes RID-related attributes in Active Directory.
Users, computers, and groups (collectively known as "security principals") that are stored in Active Directory are assigned security identifiers (SIDs), which are unique alphanumeric strings that map to a single object in the domain. SIDs consist of a domain-wide SID concatenated with a monotonically increasing relative identifier (RID) that is allocated by each Windows 2000 domain controller in the domain. Each Windows 2000 domain controller is assigned a pool of RIDs by the RID flexible single-master operations (FSMO) owner in each Active Directory domain. The RID FSMO is responsible for issuing a unique RID pool to each domain controller in its domain.
RID Attributes in Active Directory
• FsmoRoleOwner
DN path: CN=RID Manager$,CN=System,DC=domain,DC=com
Points to the Domain Name path of the current RID master's NTDS Settings object, according to the domain controller that is being queried.
• RidAvailablePool
DN path: CN=RID Manager$,CN=System,DC=domain,DC=com
The global RID space for an entire domain is defined in Ridmgr.h as a large integer with upper and lower parts. The upper part defines the number of security principals that can be allocated per domain (0x3FFFFFFF, or just over 1 billion). The lower part is the number of RIDs that have been allocated in the domain. To view both parts, use the Large Integer Converter command in the Utilities menu in Ldp.exe.
o Sample Value: 4611686014132422708 (Insert in the Large Integer Converter in the Utilities menu of Ldp.exe)
o Low Part: 2100 (Beginning of next RID pool to be allocated)
o High Part: 1073741823 (Total number of RIDS that can be created in a domain)
• RidAllocationPool
DN Path: CN=Rid Set,Cn=computername,ou=domain controllers,DC=domain,DC=COM
Each domain controller has two pools: the one that they are currently acting on, and the pool that they will use next. It is the next pool, which is allocated by the RID FSMO, that will be used for creation of security principals in the domain when the current pool is exhausted. Use the Large Integer Converter command in the Utilities menu in Ldp.exe to view both pools.
o Sample Value: 685485370535295 (Insert in the Large Integer Converter in the Utilities menu of Ldp.exe)
o Low Part: 159103 (Beginning RID in the next RID pool)
o High Part: 159602 (Ending RID in the next RID pool)
• RidNextRid
DN Path: CN=Rid Set,Cn=computername,ou=domain controllers,DC=domain,DC=COM
The RID that was assigned to the last security principal that was created on the local domain controller. RidNextRid is a non-replicated value in Active Directory.
o Sample Value: 159345 (RID assigned to the last created security principal from the RidPreviousAllocationPool)
• RidPreviousAllocationPool
DN Path: CN=Rid Set,Cn=computername,ou=domain controllers,DC=domain,DC=COM
The pool from which RIDs are currently taken. The value for RidNextRid is implicitly a member of this pool. Use the Large Integer Converter command in the Utilities menu in Ldp.exe to view the beginning and ending RIDs in the current pool. RidPreviousAllocationPool is a non-replicated value in Active Directory.
o Sample Value: 687632854183795 (Insert in Large Integer Converter command in the Utilities menu of Ldp.exe)
o Low Part: 159603 (Beginning RID in the current RID pool)
o High Part: 160102 (Ending RID in the current RID pool)
• RidUsedPool
DN Path: CN=Rid Set,Cn=computername,ou=domain controllers,DC=domain,DC=COM
Unused attribute
• NextRid
DN Path: DC=domain,DC=COM
Unused attribute.
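Each pool attribute above packs two 32-bit halves into one 64-bit integer, which is what the Large Integer Converter in Ldp.exe unpacks. The split is a simple shift and mask; the sample values below are the ones quoted above:

```python
def split_rid_pool(value):
    """Split a packed 64-bit RID pool value into its (low, high) 32-bit parts."""
    low = value & 0xFFFFFFFF   # lower 32 bits
    high = value >> 32         # upper 32 bits
    return low, high

# RidAvailablePool sample: low 2100 (next pool start), high 1073741823 (domain cap)
print(split_rid_pool(4611686014132422708))   # (2100, 1073741823)
# RidAllocationPool sample: the next pool runs from RID 159103 to 159602
print(split_rid_pool(685485370535295))       # (159103, 159602)
# RidPreviousAllocationPool sample: the current pool runs from RID 159603 to 160102
print(split_rid_pool(687632854183795))       # (159603, 160102)
```

Note that 1073741823 is 0x3FFFFFFF, the per-domain cap described under RidAvailablePool.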
Standard RAID levels
RAID 0
Diagram of a RAID 0 setup.
A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped) with no parity information for redundancy. It is important to note that RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a small number of large virtual disks out of a large number of small physical ones.
A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB.
\begin{align} \mathrm{Size} & = 2 \cdot \min \left( 120\,\mathrm{GB}, 100\,\mathrm{GB} \right) \\ & = 2 \cdot 100\,\mathrm{GB} \\ & = 200\,\mathrm{GB} \end{align}
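The same calculation in code: the array size is the number of members times the smallest member, so the 20 GB difference between the two disks is simply lost.

```python
def raid0_size(disk_sizes_gb):
    """Usable RAID 0 capacity: each member contributes only as much
    space as the smallest disk in the array."""
    return len(disk_sizes_gb) * min(disk_sizes_gb)

print(raid0_size([120, 100]))  # 200
```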
RAID 0 failure rate
Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible, though the group reliability decreases with member size.
Reliability of a given RAID 0 set is equal to the average reliability of each disk divided by the number of disks in the set:
\mathrm{MTTF}_{\mathrm{group}} \approx \frac{\mathrm{MTTF}_{\mathrm{disk}}}{\mathrm{number}}
That is, reliability (as measured by mean time to failure (MTTF) or mean time between failures (MTBF)) is roughly inversely proportional to the number of members – so a set of two disks is roughly half as reliable as a single disk. If the probability that one disk fails within three years were 5%, then for a two-disk array that probability rises to \mathbb{P}(\mbox{at least one fails}) = 1 - \mathbb{P}(\mbox{neither fails}) = 1 - (1 - 0.05)^2 = 0.0975 = 9.75\,\%.
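The arithmetic generalizes to any member count: the array survives only if no member fails. A quick check of the two-disk figure (a sketch, assuming independent failures):

```python
def raid0_failure_probability(p_disk, n_disks):
    """Probability that at least one of n independent disks fails,
    which for RAID 0 means the whole array is lost."""
    return 1 - (1 - p_disk) ** n_disks

print(round(raid0_failure_probability(0.05, 2), 4))  # 0.0975, i.e. 9.75%
```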
The reason for this is that the file system is distributed across all disks. When a drive fails the file system cannot cope with such a large loss of data and coherency since the data is "striped" across all drives (the data cannot be recovered without the missing disk). Data can be recovered using special tools, however, this data will be incomplete and most likely corrupt, and data recovery is typically very costly and not guaranteed.
RAID 0 performance
While the block size can technically be as small as a byte, it is almost always a multiple of the hard disk sector size of 512 bytes. This lets each drive seek independently when randomly reading or writing data on the disk. How much the drives act independently depends on the access pattern from the file system level. For reads and writes that are larger than the stripe size, such as copying files or video playback, each disk seeks to the same position, so the seek time of the array is the same as that of a single drive. For reads and writes that are smaller than the stripe size, such as database access, the drives are able to seek independently. If the sectors accessed are spread evenly between the two drives, the apparent seek time of the array will be half that of a single drive (assuming the disks in the array have identical access time characteristics). The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller. Note that these performance scenarios assume the best case, with optimal access patterns.
RAID 0 is useful for setups such as a large read-only NFS server where mounting many disks is time-consuming or impossible and redundancy is irrelevant.
RAID 0 is also used in some gaming systems where performance is desired and data integrity is not very important. However, real-world tests with games have shown that RAID-0 performance gains are minimal, although some desktop applications will benefit.[1][2] Another article examined these claims and concludes: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance." [3]
RAID 1
Diagram of a RAID 1 setup
A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks. This is useful when read performance or reliability are more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks (see diagram), which increases reliability geometrically over a single disk. Since each member contains a complete copy of the data, and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies.
RAID 1 failure rate
As a trivial example, consider a RAID 1 with two identical models of a disk drive with a 5% probability that the disk would fail within three years. Provided that the failures are statistically independent, then the probability of both disks failing during the three year lifetime is
\mathbb{P}(\mbox{both fail}) = \left(0.05\right)^2 = 0.0025 = 0.25\,\%.
Thus, the probability of losing all data is 0.25% if the first failed disk is never replaced. If only one of the disks fails, no data would be lost, assuming the failed disk is replaced before the second disk fails.
However, since two identical disks are used and since their usage patterns are also identical, their failures cannot be assumed to be independent. Thus, the probability of losing all data, if the first failed disk is not replaced, is considerably higher than 0.25% but still below 5%.
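For comparison with the RAID 0 case, the idealized independent-failure calculation is just the product of the individual probabilities (the paragraph above explains why real, correlated failures make this an optimistic lower bound):

```python
def raid1_loss_probability(p_disk, n_disks):
    """Probability that every mirror member fails (total data loss),
    assuming statistically independent failures -- optimistic, as noted above."""
    return p_disk ** n_disks

print(round(raid1_loss_probability(0.05, 2), 6))  # 0.0025, i.e. 0.25%
```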
As a practical matter, in a well managed system the above is irrelevant because the failed hard drive will not be ignored. It will be replaced. The reliability of the overall system is determined by the probability the remaining drive will continue to operate through the repair period, that is the total time it takes to detect a failure, replace the failed hard drive, and for that drive to be rebuilt. If, for example, it takes one hour to replace the failed drive, the overall system reliability is defined by the probability the remaining drive will operate for one hour without failure.
It is worth noting that while RAID 1 can be an effective protection against physical disk failure, it does not provide protection against data corruption due to viruses, accidental file changes or deletions, or any other data-specific changes. By design, any such changes will be instantly mirrored to every drive in the array segment. A virus, for example, that damages data on one drive in a RAID 1 array will damage the same data on all other drives in the array at the same time. For this reason systems using RAID 1 to protect against physical drive failure should also have a traditional data backup process in place to allow data restoration to previous points in time. It would seem self-evident that any system critical enough to need the protection of disk redundancy is also a system critical enough to need the protection of reliable data backups.
RAID 1 performance
Since all the data exists in two or more copies, each with its own hardware, the read performance can go up roughly as a linear multiple of the number of copies. That is, a RAID 1 array of two drives can be reading in two different places at the same time, though not all implementations of RAID 1 do this.[4] To maximize performance benefits of RAID 1, independent disk controllers are recommended, one for each disk. Some refer to this practice as splitting or duplexing. When reading, both disks can be accessed independently and requested sectors can be split evenly between the disks. For the usual mirror of two disks, this would, in theory, double the transfer rate when reading. The apparent access time of the array would be half that of a single drive. Unlike RAID 0, this would be for all access patterns, as all the data are present on all the disks. In reality, the need to move the drive heads to the next block (to skip blocks already read by the other drives) can effectively mitigate speed advantages for sequential access. Read performance can be further improved by adding drives to the mirror. Many older IDE RAID 1 controllers read only from one disk in the pair, so their read performance is always that of a single disk. Some older RAID 1 implementations would also read both disks simultaneously and compare the data to detect errors. The error detection and correction on modern disks makes this less useful in environments requiring normal availability. When writing, the array performs like a single disk, as all mirrors must be written with the data. Note that these performance scenarios are in the best case with optimal access patterns.
RAID 1 has many administrative advantages. For instance, in some environments, it is possible to "split the mirror": declare one disk as inactive, do a backup of that disk, and then "rebuild" the mirror. This is useful in situations where the file system must be constantly available. This requires that the application supports recovery from the image of data on the disk at the point of the mirror split. This procedure is less critical in the presence of the "snapshot" feature of some file systems, in which some space is reserved for changes, presenting a static point-in-time view of the file system. Alternatively, a new disk can be substituted so that the inactive disk can be kept in much the same way as traditional backup. To keep redundancy during the backup process, some controllers support adding a third disk to an active pair. After a rebuild to the third disk completes, it is made inactive and backed up as described above.
RAID 2
A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin in perfect tandem. Extremely high data transfer rates are possible. This is the only original level of RAID that is not currently used.
The use of the Hamming(7,4) code (four data bits plus three parity bits) also permits using 7 disks in RAID 2, with 4 being used for data storage and 3 being used for error correction.
RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data. Other RAID levels can detect single-bit corruption in data, or can sometimes reconstruct missing data, but cannot reliably resolve contradictions between parity bits and data bits without human intervention.
(Multiple-bit corruption is possible though extremely rare. RAID 2 can detect but not repair double-bit corruption.)
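To make the Hamming(7,4) mechanism concrete, here is a minimal bit-level sketch (bit lists stand in for the seven synchronized disks; this is illustrative code, not a disk driver):

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity over codeword positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # parity over codeword positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # parity over codeword positions 4, 5, 6, 7
    # codeword positions 1..7: p1 p2 d1 p3 d2 d3 d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct at most one flipped bit and return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # recheck positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # recheck positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]   # recheck positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit, 0 if none
    if syndrome:
        c[syndrome - 1] ^= 1         # correct the single-bit error in place
    return [c[2], c[4], c[5], c[6]]  # recover d1..d4

codeword = hamming74_encode([1, 0, 1, 1])
codeword[4] ^= 1                     # flip one bit "in transit"
print(hamming74_decode(codeword))    # [1, 0, 1, 1] -- error corrected
```

In RAID 2 the same structure applies across drives: four drives hold data bits and three hold the parity bits, so the loss (or corruption) of any single drive's bit is locatable and repairable.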
Soon afterward, all hard disks implemented an internal error correction code that also used Hamming code, so RAID 2's error correction became redundant and added unnecessary complexity. Like RAID 3, this level quickly became useless and it is now obsolete. There are no commercial applications of RAID 2.[5][6]
RAID 3
Diagram of a RAID 3 setup of 6-byte blocks and two parity bytes; shown are two blocks of data in different colors.
A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will, by definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on every disk and usually requires synchronized spindles.
In our example, a request for block "A" consisting of bytes A1-A6 would require all three data disks to seek to the beginning (A1) and reply with their contents. A simultaneous request for block B would have to wait.
However, the performance characteristic of RAID 3 is very consistent, unlike that of higher RAID levels: the size of a stripe is less than the size of a sector or OS block, so that, for both reading and writing, the entire stripe is accessed every time. The performance of the array is therefore identical to the performance of one disk in the array, except that the transfer rate is multiplied by the number of data drives (i.e., excluding parity drives).
This makes it best for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random places over the disk will get the worst performance out of this level.[6]
The requirement that all disks spin synchronously (in lockstep) added design considerations to a level that did not give significant advantages over other RAID levels, so it quickly fell out of favor and is now largely obsolete.[5] Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[7] However, some commercial vendors still make implementations of this level. It is usually implemented in hardware, and the performance issues are addressed by using large disks.[6]
RAID 4
Diagram of a RAID 4 setup with dedicated parity disk with each color representing the group of blocks in the respective parity block (a stripe)
A RAID 4 uses block-level striping with a dedicated parity disk. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously. RAID 4 looks similar to RAID 5 except that it does not use distributed parity, and similar to RAID 3 except that it stripes at the block level, rather than the byte level. Generally, RAID 4 is implemented with hardware support for parity calculations, and a minimum of 3 disks is required for a complete RAID 4 configuration.
In the example on the right, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.
For writing, the parity disk becomes a bottleneck, as simultaneous writes to A1 and B2 would, in addition to the writes to their respective data drives, both need to write to the parity drive. In this way, RAID 4 places a very high load on the parity drive in an array.
The performance of RAID 4 in this configuration can be very poor, but unlike RAID 3 it does not need synchronized spindles. However, if RAID 4 is implemented on synchronized drives and the size of a stripe is reduced below the OS block size, a RAID 4 array then has the same performance pattern as a RAID 3 array.
Currently, RAID 4 is implemented at the enterprise level only by a single company, NetApp, which addressed the performance problems discussed above with its proprietary WAFL filesystem.
Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[7]
RAID 5
Diagram of a RAID 5 setup with distributed parity with each color representing the group of blocks in the respective parity block (a stripe). This diagram shows the left-asymmetric algorithm.
A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity because of its low cost of redundancy. This can be seen by comparing the number of drives needed to achieve a given capacity. RAID 1 or RAID 1+0, which yield redundancy, give only s / 2 storage capacity, where s is the sum of the capacities of the n drives used. In RAID 5, the yield is S_{\mathrm{min}} \times (n - 1), where Smin is the size of the smallest disk in the array. As an example, four 1-TB drives can be made into a 2-TB redundant array under RAID 1 or RAID 1+0, but the same four drives can be used to build a 3-TB array under RAID 5. Although RAID 5 is commonly implemented in a disk controller, some with hardware support for parity calculations (hardware RAID cards) and some using the main system processor (motherboard-based RAID controllers), it can also be done at the operating system level, e.g., using Windows Dynamic Disks or with mdadm in Linux. A minimum of three disks is required for a complete RAID 5 configuration. In some implementations a degraded RAID 5 disk set can be made (a three-disk set of which only two are online), while mdadm supports a fully functional (non-degraded) RAID 5 setup with two disks, which functions as a slow RAID 1 but can be expanded with further volumes.
In the example, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.
RAID 5 parity handling
A concurrent series of blocks (one on each of the disks in an array) is collectively called a stripe. If another block, or some portion thereof, is written on that same stripe, the parity block, or some portion thereof, is recalculated and rewritten. For small writes, this requires:
* Read the old data block
* Read the old parity block
* Compare the old data block with the write request. For each bit that has flipped (changed from 0 to 1, or from 1 to 0) in the data block, flip the corresponding bit in the parity block
* Write the new data block
* Write the new parity block
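In XOR terms, the steps above collapse to one identity: the new parity is the old parity XORed with the change in the data. A minimal Python sketch (helper names are illustrative, not from any RAID implementation):

```python
# Small-write ("read-modify-write") parity update. Parity is the XOR of
# the data blocks in a stripe, so new_parity = old_parity ^ old_data ^
# new_data: every bit that flips in the data flips in the parity.

def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_parity(old_data, new_data, old_parity):
    # The old data and old parity have already been read from disk;
    # compute which bits changed and apply them to the parity block.
    delta = xor_blocks(old_data, new_data)
    return xor_blocks(old_parity, delta)

# Verify against recomputing parity from scratch over a 3-data-disk stripe.
d0, d1, d2 = b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"
parity = xor_blocks(xor_blocks(d0, d1), d2)
new_d1 = b"\x12\x34"
new_parity = small_write_parity(d1, new_d1, parity)
assert new_parity == xor_blocks(xor_blocks(d0, new_d1), d2)
```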
The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity blocks. RAID 5 writes are expensive in terms of disk operations and traffic between the disks and the controller.
The parity blocks are not read on data reads, since this would add unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of a data sector results in a CRC error. In that case, the sectors in the same relative position within each of the remaining data blocks in the stripe, together with the parity block, are used to reconstruct the errant sector. The CRC error is thus hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive on the fly.
This is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but this is only so that the operating system can notify the administrator that a drive needs replacement; applications running on the computer are unaware of the failure. Reading and writing to the drive array continues seamlessly, though with some performance degradation.
RAID 5 disk failure rate
The maximum number of drives in a RAID 5 redundancy group is theoretically unlimited. The tradeoffs of larger redundancy groups are greater probability of a simultaneous double disk failure, the increased time to rebuild a redundancy group, and the greater probability of encountering an unrecoverable sector during RAID reconstruction. As the number of disks in a RAID 5 group increases, the mean time between failures (MTBF, the reciprocal of the failure rate) can become lower than that of a single disk. This happens when the likelihood of a second disk's failing out of N − 1 dependent disks, within the time it takes to detect, replace and recreate a first failed disk, becomes larger than the likelihood of a single disk's failing.
Solid-state drives (SSDs) may present a revolutionary instead of evolutionary way of dealing with increasing RAID-5 rebuild limitations. With encouragement from many flash-SSD manufacturers, JEDEC is preparing to set standards in 2009 for measuring UBER (uncorrectable bit error rates) and "raw" bit error rates (error rates before ECC, error correction code).[8] But even the economy-class Intel X25-M SSD claims an unrecoverable error rate of 1 sector in 10^15 bits and an MTBF of two million hours.[9] Ironically, the much-faster throughput of SSDs (STEC claims its enterprise-class Zeus SSDs exceed 200 times the transactional performance of today's 15k-RPM, enterprise-class HDDs)[10] suggests that a similar error rate (1 in 10^15) will result in a shortening of MTBF by two orders of magnitude.
In the event of a system failure while there are active writes, the parity of a stripe may become inconsistent with the data. If this is not detected and repaired before a disk or block fails, data loss may ensue as incorrect parity will be used to reconstruct the missing block in that stripe. This potential vulnerability is sometimes known as the write hole. Battery-backed cache and similar techniques are commonly used to reduce the window of opportunity for this to occur. The same issue occurs for RAID-6.
RAID 5 performance
RAID 5 implementations suffer from poor performance when faced with a workload which includes many writes which are smaller than the capacity of a single stripe. This is because parity must be updated on each write, requiring read-modify-write sequences for both the data block and the parity block. More complex implementations may include a non-volatile write back cache to reduce the performance impact of incremental parity updates.
Random write performance is poor, especially at high concurrency levels common in large multi-user databases. The read-modify-write cycle requirement of RAID 5's parity implementation penalizes random writes by as much as an order of magnitude compared to RAID 0.[11]
Performance problems can be so severe that some database experts have formed a group called BAARF — the Battle Against Any Raid Five.[12]
The read performance of RAID 5 is almost as good as RAID 0 for the same number of disks. Except for the parity blocks, the distribution of data over the drives follows the same pattern as RAID 0. The reason RAID 5 is slightly slower is that the disks must skip over the parity blocks.
RAID 5 usable size
Parity data uses up the capacity of one drive in the array (this can be seen by comparing it with RAID 4: RAID 5 distributes the parity data across the disks, while RAID 4 centralizes it on one disk, but the amount of parity data is the same). If the drives vary in capacity, the smallest of them sets the limit. Therefore, the usable capacity of a RAID 5 array is (N-1) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and Smin is the capacity of the smallest drive in the array.
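The usable-size formula can be checked with a one-line helper (hypothetical name; sizes here are in GB):

```python
# Usable capacity of a RAID 5 array: one drive's worth of capacity is
# consumed by parity, and the smallest drive sets the per-drive limit.

def raid5_usable(sizes):
    return (len(sizes) - 1) * min(sizes)

assert raid5_usable([1000, 1000, 1000, 1000]) == 3000  # four 1-TB drives -> 3 TB
assert raid5_usable([120, 100, 100]) == 200            # smallest drive sets the limit
```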
The number of hard disks that can belong to a single array is limited only by the capacity of the storage controller in hardware implementations, or by the OS in software RAID. One caveat is that unlike RAID 1, as the number of disks in an array increases, the chance of data loss due to multiple drive failures is increased. This is because there is a reduced ratio of "losable" drives (the number of drives which may fail before data is lost) to total drives.
RAID 6
Diagram of a RAID 6 setup, which is identical to RAID 5 other than the addition of a second parity block
Redundancy and data loss recovery capability
RAID 6 extends RAID 5 by adding an additional parity block; thus it uses block-level striping with two parity blocks distributed across all member disks.
Performance (speed)
RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture – in software, in firmware, or by using firmware with specialized ASICs for intensive parity calculations. It can be as fast as a RAID-5 system with one less drive (same number of data drives).[13]
Efficiency (potential waste of storage)
RAID 6 is no more space inefficient than RAID 5 with a hot spare drive when used with a small number of drives, but as arrays become bigger and have more drives, the loss in storage capacity becomes less important and the probability of data loss becomes greater. RAID 6 provides protection against data loss during an array rebuild, when a second drive is lost, a bad block read is encountered, or when a human operator accidentally removes and replaces the wrong disk drive when attempting to replace a failed drive.
The usable capacity of a RAID 6 array is (N-2) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and Smin is the capacity of the smallest drive in the array.
Implementation
According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed-Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."[14]
Galois fields
Two different syndromes need to be computed in order to allow the loss of any two drives. One of them, P, can be the simple XOR of the data across the stripe, as with RAID 5. A second, independent syndrome is more complicated.
\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; ... \;\oplus\; \mathbf{D}_{n-1}
\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; ... \;\oplus\; g^{n-1}\mathbf{D}_{n-1}
In the formula above,[15] the calculation of P is just the XOR of each stripe. In Galois field arithmetic, the addition operation is just bitwise XOR.
The calculation of Q also involves a generator, which is mixed in using the mathematical equivalent of a linear feedback shift register. The generator is a value g chosen such that its powers g^0, g^1, ..., g^{n-1} are all distinct.
If one data drive is lost, the data can be recomputed from P just like with RAID 5. If two data drives are lost, the data can be recovered from P and Q using a more complex process.
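A minimal Python sketch of the P and Q computations above, using GF(2^8) with the reduction polynomial 0x11d (a common choice in practice; the Linux kernel's RAID-6 code uses it) and generator g = 2. The data values are arbitrary illustrative bytes; two-drive recovery additionally needs Galois-field division and is omitted here:

```python
# P and Q syndromes for one byte position across a RAID 6 stripe.
# Addition in GF(2^8) is XOR; multiplication is carry-less, reduced
# modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d).

def gf_mul(a, b):
    """Multiply in GF(2^8) modulo 0x11d (Russian-peasant style)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def syndromes(data):
    """Compute P (plain XOR) and Q (XOR of g^i * D_i) for one stripe byte."""
    p = q = 0
    g_i = 1                      # g^0
    for d in data:
        p ^= d
        q ^= gf_mul(g_i, d)
        g_i = gf_mul(g_i, 2)     # advance to g^(i+1)
    return p, q

data = [0x37, 0xc2, 0x5a, 0x01]
p, q = syndromes(data)

# Single data-drive loss: recover from P exactly as RAID 5 would.
lost = 2
partial = 0
for i, d in enumerate(data):
    if i != lost:
        partial ^= d
assert partial ^ p == data[lost]
```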
The computation of Q is CPU intensive, in contrast to the simplicity of P. Thus, a RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.
Non-standard RAID levels and non-RAID drive architectures
Main articles: Non-standard RAID levels and Non-RAID drive architectures
There are other RAID levels that are promoted by individual vendors, but not generally standardized. The non-standard RAID levels 5E, 5EE and 6E extend RAID 5 and 6 with hot-spare drives.
Other non-standard RAID levels include: RAID 1.5, RAID 7, RAID-DP, RAID S or parity RAID, Matrix RAID, RAID-K, RAID-Z, RAIDn, Linux MD RAID 10, IBM ServeRAID 1E, unRAID, and Drobo BeyondRAID.
Diagram of a RAID 0 setup.
A RAID 0 (also known as a stripe set or striped volume) splits data evenly across two or more disks (striped) with no parity information for redundancy. RAID 0 was not one of the original RAID levels and provides no data redundancy. RAID 0 is normally used to increase performance, although it can also be used as a way to create a small number of large virtual disks out of a large number of small physical ones.
A RAID 0 can be created with disks of differing sizes, but the storage space added to the array by each disk is limited to the size of the smallest disk. For example, if a 120 GB disk is striped together with a 100 GB disk, the size of the array will be 200 GB.
\begin{align} \mathrm{Size} & = 2 \cdot \min \left( 120\,\mathrm{GB}, 100\,\mathrm{GB} \right) \\ & = 2 \cdot 100\,\mathrm{GB} \\ & = 200\,\mathrm{GB} \end{align}
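The same even-striping idea determines where each logical block lands; an idealized Python sketch (function name hypothetical):

```python
# Idealized RAID 0 block mapping: logical block b of an n-disk stripe set
# lands on disk (b mod n) at per-disk offset (b div n), so consecutive
# blocks alternate across the disks.

def raid0_map(block, n_disks):
    return block % n_disks, block // n_disks

# With two disks, logical blocks 0..3 map as: disk0/0, disk1/0, disk0/1, disk1/1.
assert [raid0_map(b, 2) for b in range(4)] == [(0, 0), (1, 0), (0, 1), (1, 1)]
```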
RAID 0 failure rate
Although RAID 0 was not specified in the original RAID paper, an idealized implementation of RAID 0 would split I/O operations into equal-sized blocks and spread them evenly across two disks. RAID 0 implementations with more than two disks are also possible, though the group reliability decreases with member size.
Reliability of a given RAID 0 set is equal to the average reliability of each disk divided by the number of disks in the set:
\mathrm{MTTF}_{\mathrm{group}} \approx \frac{\mathrm{MTTF}_{\mathrm{disk}}}{\mathrm{number}}
That is, reliability (as measured by mean time to failure (MTTF) or mean time between failures (MTBF)) is roughly inversely proportional to the number of members – so a set of two disks is roughly half as reliable as a single disk. If there were a probability of 5% that the disk would fail within three years, in a two disk array, that probability would be increased to \mathbb{P}(\mbox{at least one fails}) = 1 - \mathbb{P}(\mbox{neither fails}) = 1 - (1 - 0.05)^2 = 0.0975 = 9.75\,\%.
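The arithmetic above, as a small sketch (function name hypothetical; assumes independent disk failures):

```python
# A RAID 0 set fails if ANY member disk fails, so with an independent
# per-disk failure probability p over some period, the set's failure
# probability is 1 - (1 - p)^n.

def raid0_fail_prob(p_disk, n):
    return 1 - (1 - p_disk) ** n

# 5% per-disk risk over three years, two disks -> 9.75% for the array.
assert abs(raid0_fail_prob(0.05, 2) - 0.0975) < 1e-12
```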
The reason for this is that the file system is distributed across all disks. When a drive fails, the file system cannot cope with such a large loss of data and coherency since the data is "striped" across all drives (the data cannot be recovered without the missing disk). Data can be recovered using special tools; however, this data will be incomplete and most likely corrupt, and data recovery is typically very costly and not guaranteed.
RAID 0 performance
While the block size can technically be as small as a byte, it is almost always a multiple of the hard disk sector size of 512 bytes. This lets each drive seek independently when randomly reading or writing data on the disk. How much the drives act independently depends on the access pattern from the file system level. For reads and writes that are larger than the stripe size, such as copying files or video playback, the disks will be seeking to the same position on each disk, so the seek time of the array will be the same as that of a single drive. For reads and writes that are smaller than the stripe size, such as database access, the drives will be able to seek independently. If the sectors accessed are spread evenly between the two drives, the apparent seek time of the array will be half that of a single drive (assuming the disks in the array have identical access time characteristics). The transfer speed of the array will be the transfer speed of all the disks added together, limited only by the speed of the RAID controller. Note that these performance scenarios are in the best case with optimal access patterns.
RAID 0 is useful for setups such as a large read-only NFS server where mounting many disks is time-consuming or impossible and redundancy is irrelevant.
RAID 0 is also used in some gaming systems where performance is desired and data integrity is not very important. However, real-world tests with games have shown that RAID-0 performance gains are minimal, although some desktop applications will benefit.[1][2] Another article examined these claims and concluded: "Striping does not always increase performance (in certain situations it will actually be slower than a non-RAID setup), but in most situations it will yield a significant improvement in performance."[3]
RAID 1
Diagram of a RAID 1 setup
A RAID 1 creates an exact copy (or mirror) of a set of data on two or more disks. This is useful when read performance or reliability is more important than data storage capacity. Such an array can only be as big as the smallest member disk. A classic RAID 1 mirrored pair contains two disks (see diagram), which increases reliability geometrically over a single disk. Since each member contains a complete copy of the data, and can be addressed independently, ordinary wear-and-tear reliability is raised by the power of the number of self-contained copies.
RAID 1 failure rate
As a trivial example, consider a RAID 1 with two identical models of a disk drive with a 5% probability that the disk would fail within three years. Provided that the failures are statistically independent, the probability of both disks failing during the three-year lifetime is
P(\mathrm{both\ fail}) = \left(0.05\right)^2 = 0.0025 = 0.25\,\%.
Thus, the probability of losing all data is 0.25% if the first failed disk is never replaced. If only one of the disks fails, no data would be lost, assuming the failed disk is replaced before the second disk fails.
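The same arithmetic as a sketch (function name hypothetical; assumes statistically independent failures):

```python
# A two-disk RAID 1 loses data only if BOTH disks fail before the first
# failure is replaced, so with an independent per-disk probability p the
# array's failure probability is p^n for n mirrored copies.

def raid1_fail_prob(p_disk, n_mirrors):
    return p_disk ** n_mirrors

# 5% per-disk risk over three years, two mirrored disks -> 0.25%.
assert abs(raid1_fail_prob(0.05, 2) - 0.0025) < 1e-12
```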
However, since two identical disks are used and since their usage patterns are also identical, their failures cannot be assumed to be independent. Thus, the probability of losing all data, if the first failed disk is not replaced, is considerably higher than 0.25% but still below 5%.
As a practical matter, in a well managed system the above is irrelevant because the failed hard drive will not be ignored. It will be replaced. The reliability of the overall system is determined by the probability that the remaining drive will continue to operate through the repair period, that is, the total time it takes to detect a failure, replace the failed hard drive, and rebuild that drive. If, for example, it takes one hour to replace the failed drive, the overall system reliability is defined by the probability that the remaining drive will operate for one hour without failure.
It is worth noting that while RAID 1 can be an effective protection against physical disk failure, it does not provide protection against data corruption due to viruses, accidental file changes or deletions, or any other data-specific changes. By design, any such changes will be instantly mirrored to every drive in the array segment. A virus, for example, that damages data on one drive in a RAID 1 array will damage the same data on all other drives in the array at the same time. For this reason systems using RAID 1 to protect against physical drive failure should also have a traditional data backup process in place to allow data restoration to previous points in time. It would seem self-evident that any system critical enough to need the protection of disk redundancy is also a system critical enough to need the protection of reliable data backups.
RAID 1 performance
Since all the data exists in two or more copies, each with its own hardware, the read performance can go up roughly as a linear multiple of the number of copies. That is, a RAID 1 array of two drives can be reading in two different places at the same time, though not all implementations of RAID 1 do this.[4] To maximize performance benefits of RAID 1, independent disk controllers are recommended, one for each disk. Some refer to this practice as splitting or duplexing.
When reading, both disks can be accessed independently and requested sectors can be split evenly between the disks. For the usual mirror of two disks, this would, in theory, double the transfer rate when reading. The apparent access time of the array would be half that of a single drive. Unlike RAID 0, this would be for all access patterns, as all the data are present on all the disks. In reality, the need to move the drive heads to the next block (to skip blocks already read by the other drives) can effectively mitigate speed advantages for sequential access. Read performance can be further improved by adding drives to the mirror.
Many older IDE RAID 1 controllers read only from one disk in the pair, so their read performance is always that of a single disk. Some older RAID 1 implementations would also read both disks simultaneously and compare the data to detect errors. The error detection and correction on modern disks makes this less useful in environments requiring normal availability. When writing, the array performs like a single disk, as all mirrors must be written with the data. Note that these performance scenarios are in the best case with optimal access patterns.
RAID 1 has many administrative advantages. For instance, in some environments, it is possible to "split the mirror": declare one disk as inactive, do a backup of that disk, and then "rebuild" the mirror. This is useful in situations where the file system must be constantly available. This requires that the application supports recovery from the image of data on the disk at the point of the mirror split. This procedure is less critical in the presence of the "snapshot" feature of some file systems, in which some space is reserved for changes, presenting a static point-in-time view of the file system. Alternatively, a new disk can be substituted so that the inactive disk can be kept in much the same way as traditional backup. To keep redundancy during the backup process, some controllers support adding a third disk to an active pair. After a rebuild to the third disk completes, it is made inactive and backed up as described above.
[edit] RAID 2
A RAID 2 stripes data at the bit (rather than block) level, and uses a Hamming code for error correction. The disks are synchronized by the controller to spin in perfect tandem. Extremely high data transfer rates are possible. This is the only original level of RAID that is not currently used.
The use of the Hamming(7,4) code (four data bits plus three parity bits) also permits using 7 disks in RAID 2, with 4 being used for data storage and 3 being used for error correction.
RAID 2 is the only standard RAID level, other than some implementations of RAID 6, which can automatically recover accurate data from single-bit corruption in data. Other RAID levels can detect single-bit corruption in data, or can sometimes reconstruct missing data, but cannot reliably resolve contradictions between parity bits and data bits without human intervention.
(Multiple-bit corruption is possible though extremely rare. RAID 2 can detect but not repair double-bit corruption.)
All hard disks soon after implemented an error correction code that also used Hamming code, so RAID 2's error corrrection was now redundant and added unnecessary complexity. Like RAID 3, this level quickly became useless and it is now obsolete. There are no commercial applications of RAID 2.[5][6]
[edit] RAID 3
Diagram of a RAID 3 setup of 6-byte blocks and two parity bytes, shown are two blocks of data in different colors.
A RAID 3 uses byte-level striping with a dedicated parity disk. RAID 3 is very rare in practice. One of the side effects of RAID 3 is that it generally cannot service multiple requests simultaneously. This comes about because any single block of data will, by definition, be spread across all members of the set and will reside in the same location. So, any I/O operation requires activity on every disk and usually requires synchronized spindles.
In our example, a request for block "A" consisting of bytes A1-A6 would require all three data disks to seek to the beginning (A1) and reply with their contents. A simultaneous request for block B would have to wait.
However, the performance characteristic of RAID 3 is very consistent, unlike higher RAID levels,[clarification needed] the size of a stripe is less than the size of a sector or OS block so that, for both reading and writing, the entire stripe is accessed every time. The performance of the array is therefore identical to the performance of one disk in the array except for the transfer rate, which is multiplied by the number of data drives (i.e., less parity drives).
This makes it best for applications that demand the highest transfer rates in long sequential reads and writes, for example uncompressed video editing. Applications that make small reads and writes from random places over the disk will get the worst performance out of this level.[6]
The requirement that all disks spin synchronously, aka in lockstep, added design considerations to a level that didn't give significant advantages over other RAID levels, so it quickly became useless and is now obsolete.[5] Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[7] However, this level has commercial vendors making implementations of it. It's usually implemented in hardware, and the performance issues are addressed by using large disks.[6]
[edit] RAID 4
Diagram of a RAID 4 setup with dedicated parity disk with each color representing the group of blocks in the respective parity block (a stripe)
A RAID 4 uses block-level striping with a dedicated parity disk. This allows each member of the set to act independently when only a single block is requested. If the disk controller allows it, a RAID 4 set can service multiple read requests simultaneously. RAID 4 looks similar to RAID 5 except that it does not use distributed parity, and similar to RAID 3 except that it stripes at the block level, rather than the byte level. Generally, RAID 4 is implemented with hardware support for parity calculations, and a minimum of 3 disks is required for a complete RAID 4 configuration.
In the example on the right, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.
For writing the parity disk becomes a bottleneck, as simultaneous writes to A1 and B2 would in addition to the writes to their respective drives also both need to write to the parity drive. In this way RAID example 4 places a very high load on the parity drive in an array.
The performance of RAID 4 in this configuration can be very poor, but unlike RAID 3 it does not need synchronized spindles. However, if RAID 4 is implemented on synchronized drives and the size of a stripe is reduced below the OS block size a RAID 4 array then has the same performance pattern as a RAID 3 array.
Currently, RAID 4 is only implemented at the enterprise level by one single company, NetApp, who solved the performance problems discussed above with their proprietary WAFL filesystem.[citation needed]
Both RAID 3 and RAID 4 were quickly replaced by RAID 5.[7]
[edit] RAID 5
Diagram of a RAID 5 setup with distributed parity with each color representing the group of blocks in the respective parity block (a stripe). This diagram shows left asymmetric algorithm
A RAID 5 uses block-level striping with parity data distributed across all member disks. RAID 5 has achieved popularity because of its low cost of redundancy. This can be seen by comparing the number of drives needed to achieve a given capacity. RAID 1 or RAID 1+0, which yield redundancy, give only s / 2 storage capacity, where s is the sum of the capacities of n drives used. In RAID 5, the yield is S_{\mathrm{min}} \times (n - 1) where Smin is the size of the smallest disk in the array. As an example, four 1-TB drives can be made into a 2-TB redundant array under RAID 1 or RAID 1+0, but the same four drives can be used to build a 3-TB array under RAID 5. Although RAID 5 is commonly implemented in a disk controller, some with hardware support for parity calculations (hardware RAID cards) and some using the main system processor (motherboard based RAID controllers), it can also be done at the operating system level, e.g., using Windows Dynamic Disks or with mdadm in Linux. A minimum of three disks is required for a complete RAID 5 configuration. In some implementations a degraded RAID 5 disk set can be made (three disk set of which only two are online), while mdadm supports a fully-functional (non-degraded) RAID 5 setup with two disks - which function as a slow RAID-1, but can be expanded with further volumes.
In the example, a read request for block A1 would be serviced by disk 0. A simultaneous read request for block B1 would have to wait, but a read request for B2 could be serviced concurrently by disk 1.
RAID 5 parity handling
A concurrent series of blocks (one on each of the disks in an array) is collectively called a stripe. If another block, or some portion thereof, is written on that same stripe, the parity block, or some portion thereof, is recalculated and rewritten. For small writes, this requires:
* Read the old data block
* Read the old parity block
* Compare the old data block with the write request. For each bit that has flipped (changed from 0 to 1, or from 1 to 0) in the data block, flip the corresponding bit in the parity block
* Write the new data block
* Write the new parity block
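The small-write sequence above can be sketched with short byte strings standing in for disk blocks. This is a minimal illustration of the read-modify-write idea, not drawn from any real RAID implementation; the helper name and block values are made up.

```python
def xor_blocks(a, b):
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

# A stripe across three data disks plus one parity disk.
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xAA\xBB\xCC\xDD"]
parity = xor_blocks(xor_blocks(data[0], data[1]), data[2])

# Small write to disk 1:
new_block = b"\x11\x22\x33\x44"
old_block = data[1]                                 # 1. read the old data block
new_parity = xor_blocks(parity,                     # 2. read the old parity block
                        xor_blocks(old_block, new_block))  # 3. flip the changed bits
data[1] = new_block                                 # 4. write the new data block
parity = new_parity                                 # 5. write the new parity block

# The updated parity still equals the XOR of all current data blocks.
assert parity == xor_blocks(xor_blocks(data[0], data[1]), data[2])
```

Note that the parity disk never needs to see the other data disks: XORing the old and new data into the old parity flips exactly the bits that changed.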
The disk used for the parity block is staggered from one stripe to the next, hence the term distributed parity blocks. RAID 5 writes are expensive in terms of disk operations and traffic between the disks and the controller.
The parity blocks are not read on data reads, since this would add unnecessary overhead and would diminish performance. The parity blocks are read, however, when a read of a data sector fails with a CRC error: the sectors in the same relative position within the remaining data blocks of the stripe, together with the parity block, are used to reconstruct the errant sector. The CRC error is thus hidden from the main computer. Likewise, should a disk fail in the array, the parity blocks from the surviving disks are combined mathematically with the data blocks from the surviving disks to reconstruct the data on the failed drive on the fly.
This is sometimes called Interim Data Recovery Mode. The computer knows that a disk drive has failed, but only so that the operating system can notify the administrator that a drive needs replacement; applications running on the computer are unaware of the failure. Reading and writing to the drive array continues seamlessly, though with some performance degradation.
RAID 5 disk failure rate
The maximum number of drives in a RAID 5 redundancy group is theoretically unlimited. The tradeoffs of larger redundancy groups are greater probability of a simultaneous double disk failure, the increased time to rebuild a redundancy group, and the greater probability of encountering an unrecoverable sector during RAID reconstruction. As the number of disks in a RAID 5 group increases, the mean time between failures (MTBF, the reciprocal of the failure rate) can become lower than that of a single disk. This happens when the likelihood of a second disk's failing out of N − 1 dependent disks, within the time it takes to detect, replace and recreate a first failed disk, becomes larger than the likelihood of a single disk's failing.
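The crossover described above can be illustrated with a back-of-the-envelope calculation. The MTBF and rebuild-window figures below are hypothetical assumptions, not taken from the text; the model simply treats each drive as failing independently at a constant rate.

```python
# Hypothetical figures for illustration only.
mtbf_hours = 500_000      # assumed per-drive MTBF
rebuild_hours = 24        # assumed time to detect, replace, and rebuild

p_hour = 1 / mtbf_hours   # per-drive failure probability per hour (approx.)

for n in (4, 8, 16, 32):
    # Probability that at least one of the remaining n-1 drives fails
    # during the rebuild window of the first failed drive.
    p_second = 1 - (1 - p_hour) ** ((n - 1) * rebuild_hours)
    print(f"{n:2d} drives: P(second failure during rebuild) = {p_second:.2e}")
```

The probability grows roughly linearly with the number of drives, which is why large single-parity redundancy groups become risky.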
Solid-state drives (SSDs) may present a revolutionary instead of evolutionary way of dealing with increasing RAID-5 rebuild limitations. With encouragement from many flash-SSD manufacturers, JEDEC is preparing to set standards in 2009 for measuring UBER (uncorrectable bit error rates) and "raw" bit error rates (error rates before ECC, error correction code).[8] But even the economy-class Intel X25-M SSD claims an unrecoverable error rate of 1 sector in 10^15 bits and an MTBF of two million hours.[9] Ironically, the much faster throughput of SSDs (STEC claims its enterprise-class Zeus SSDs exceed 200 times the transactional performance of today's 15k-RPM, enterprise-class HDDs)[10] suggests that a similar error rate (1 in 10^15) will result in a shortening of MTBF by two orders of magnitude.
In the event of a system failure while there are active writes, the parity of a stripe may become inconsistent with the data. If this is not detected and repaired before a disk or block fails, data loss may ensue as incorrect parity will be used to reconstruct the missing block in that stripe. This potential vulnerability is sometimes known as the write hole. Battery-backed cache and similar techniques are commonly used to reduce the window of opportunity for this to occur. The same issue occurs for RAID-6.
RAID 5 performance
RAID 5 implementations suffer from poor performance when faced with a workload which includes many writes which are smaller than the capacity of a single stripe. This is because parity must be updated on each write, requiring read-modify-write sequences for both the data block and the parity block. More complex implementations may include a non-volatile write back cache to reduce the performance impact of incremental parity updates.
Random write performance is poor, especially at high concurrency levels common in large multi-user databases. The read-modify-write cycle requirement of RAID 5's parity implementation penalizes random writes by as much as an order of magnitude compared to RAID 0.[11]
Performance problems can be so severe that some database experts have formed a group called BAARF — the Battle Against Any Raid Five.[12]
The read performance of RAID 5 is almost as good as RAID 0 for the same number of disks. Except for the parity blocks, the distribution of data over the drives follows the same pattern as RAID 0. The reason RAID 5 is slightly slower is that the disks must skip over the parity blocks.
RAID 5 usable size
Parity data uses up the capacity of one drive in the array (this can be seen by comparing it with RAID 4: RAID 5 distributes the parity data across the disks, while RAID 4 centralizes it on one disk, but the amount of parity data is the same). If the drives vary in capacity, the smallest of them sets the limit. Therefore, the usable capacity of a RAID 5 array is (N-1) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and Smin is the capacity of the smallest drive in the array.
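The formula can be checked against the four-drive example given earlier; the helper function below is ours, not from any tool, and drive sizes are in terabytes.

```python
def raid5_usable(drive_sizes):
    """Usable capacity of a RAID 5 array: (N - 1) * Smin."""
    return (len(drive_sizes) - 1) * min(drive_sizes)

print(raid5_usable([1, 1, 1, 1]))  # four 1-TB drives -> 3 TB usable
print(raid5_usable([2, 1, 2]))     # the 1-TB drive sets the limit -> 2 TB
```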
The number of hard disks that can belong to a single array is limited only by the capacity of the storage controller in hardware implementations, or by the OS in software RAID. One caveat is that unlike RAID 1, as the number of disks in an array increases, the chance of data loss due to multiple drive failures is increased. This is because there is a reduced ratio of "losable" drives (the number of drives which may fail before data is lost) to total drives.
RAID 6
Diagram of a RAID 6 setup, which is identical to RAID 5 other than the addition of a second parity block
Redundancy and data loss recovery capability
RAID 6 extends RAID 5 by adding an additional parity block; thus it uses block-level striping with two parity blocks distributed across all member disks.
Performance (speed)
RAID 6 does not have a performance penalty for read operations, but it does have a performance penalty on write operations because of the overhead associated with parity calculations. Performance varies greatly depending on how RAID 6 is implemented in the manufacturer's storage architecture – in software, firmware or by using firmware and specialized ASICs for intensive parity calculations. It can be as fast as a RAID-5 system with one less drive (same number of data drives).[13]
Efficiency (potential waste of storage)
RAID 6 is no more space-inefficient than RAID 5 with a hot-spare drive when used with a small number of drives; as arrays become bigger and have more drives, the loss in storage capacity becomes less important while the probability of data loss grows. RAID 6 provides protection against data loss during an array rebuild, when a second drive is lost, a bad block read is encountered, or when a human operator accidentally removes and replaces the wrong disk drive when attempting to replace a failed drive.
The usable capacity of a RAID 6 array is (N-2) \cdot S_{\mathrm{min}}, where N is the total number of drives in the array and Smin is the capacity of the smallest drive in the array.
Implementation
According to the Storage Networking Industry Association (SNIA), the definition of RAID 6 is: "Any form of RAID that can continue to execute read and write requests to all of a RAID array's virtual disks in the presence of any two concurrent disk failures. Several methods, including dual check data computations (parity and Reed-Solomon), orthogonal dual parity check data and diagonal parity, have been used to implement RAID Level 6."[14]
Galois fields
Two different syndromes need to be computed in order to allow the loss of any two drives. One of them, P, can be the simple XOR of the data across the stripes, as with RAID 5. A second, independent syndrome is more complicated.
\mathbf{P} = \bigoplus_i{D_i} = \mathbf{D}_0 \;\oplus\; \mathbf{D}_1 \;\oplus\; \mathbf{D}_2 \;\oplus\; ... \;\oplus\; \mathbf{D}_{n-1}
\mathbf{Q} = \bigoplus_i{g^iD_i} = g^0\mathbf{D}_0 \;\oplus\; g^1\mathbf{D}_1 \;\oplus\; g^2\mathbf{D}_2 \;\oplus\; ... \;\oplus\; g^{n-1}\mathbf{D}_{n-1}
In the formula above,[15] the calculation of P is just the XOR of each stripe. In Galois field arithmetic, the + operation is just bitwise XOR.
The calculation of Q also involves a generator, which is mixed in using the mathematical equivalent of a linear feedback shift register, indicated by the dot operator. The generator g is simply an element of the field chosen such that the powers g^i are all distinct for i < n.
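The two syndromes above can be computed for one byte position at a time in GF(2^8). The sketch below uses generator g = 2 and the reduction polynomial 0x11d (the field commonly used for RAID 6, e.g. by the Linux software RAID code); the function names and sample bytes are ours.

```python
def gf_mul(a, b):
    """Multiply two elements of GF(2^8) modulo x^8 + x^4 + x^3 + x^2 + 1 (0x11d)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D  # low byte of the reduction polynomial
    return p

def syndromes(data):
    """P and Q parity bytes for one byte position across the data drives."""
    P = Q = 0
    g_i = 1                     # g^0, with generator g = 2
    for d in data:
        P ^= d                  # P = D0 + D1 + ...  (+ is XOR)
        Q ^= gf_mul(g_i, d)     # Q = g^0.D0 + g^1.D1 + ...
        g_i = gf_mul(g_i, 2)    # advance to the next power of g
    return P, Q

data = [0x11, 0x22, 0x33, 0x44]  # one byte from each of four data drives
P, Q = syndromes(data)

# A single lost data drive is recovered from P alone, as in RAID 5:
lost = P ^ data[0] ^ data[1] ^ data[3]
assert lost == data[2]
```

Recovering two lost data drives uses both P and Q: the two syndrome equations are solved for the two unknown bytes, which additionally requires Galois-field division.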
If one data drive is lost, the data can be recomputed from P just like with RAID 5. If two data drives are lost, the data can be recovered from P and Q using a more complex process.
The computation of Q is CPU intensive, in contrast to the simplicity of P. Thus, a RAID 6 implemented in software will have a more significant effect on system performance, and a hardware solution will be more complex.
Non-standard RAID levels and non-RAID drive architectures
Main articles: Non-standard RAID levels and Non-RAID drive architectures
There are other RAID levels that are promoted by individual vendors, but not generally standardized. The non-standard RAID levels 5E, 5EE and 6E extend RAID 5 and 6 with hot-spare drives.
Other non-standard RAID levels include: RAID 1.5, RAID 7, RAID-DP, RAID S or parity RAID, Matrix RAID, RAID-K, RAID-Z, RAIDn, Linux MD RAID 10, IBM ServeRAID 1E, unRAID, and Drobo BeyondRAID.
Tuesday, May 5, 2009
Understanding Server Roles
Understanding the File Server Role
The file server role is one of the most widely used roles when configuring servers in Windows Server 2003 based networks, because a file server stores data for network users and provides access to the files stored on it. The file server role is not, however, available in the Windows Server 2003 Web Edition. A file stored on a file server volume can be accessed by users that have the necessary rights to the directories wherein the files are stored.
File servers provide the following functionality to users:
• Enables users to store files in a centralized location.
• Enables users to share files with other users.
A few characteristics and features of the file server role are listed:
• Files and folder resources can be shared between network users.
• Administrators can manage the following aspects of file servers:
o Access to files and folders
o Disk space
o Disk quotas can be implemented to control the amount of space which users can utilize.
• For file servers that have NTFS volumes:
o NTFS security can be used to protect files from users who are not authorized to access the files and folders.
o Encrypting File System (EFS) enables users to encrypt files and folders, and entire data drives on NTFS formatted volumes. EFS secures confidential corporate data from unauthorized access.
o Distributed File System (Dfs) provides a single hierarchical file system that assists in organizing shared folders on multiple computers in the network. Dfs provides a single logical file system structure by concealing the underlying file share structure within a virtual folder structure. Users only see a single file structure even though there are multiple folders located on different file servers within the organization.
• The Offline Files feature can be enabled if necessary. Offline Files makes it possible for a user to mirror server files to a local laptop, and ensures that the laptop files and server files are kept in sync. For laptop users, Offline Files ensures that the user can access the server-based files when they are not connected to the network.
Understanding the Print Server Role
The print server role provides network printing capabilities for the network. Through the print server role, you can configure a server to manage printing functions on the network. Users typically connect to a network printer through a connection to a print server. The print server is the computer where the print drivers are located that manage printing between printers and client computers. With Windows NT, Windows 2000, Windows XP, and Windows Server 2003, the print servers supply clients with the necessary printer drivers. The print servers also manage communication between the printers and the client computers. The print servers manage the print queues, and can also supply audit logs on jobs printed by users. A network interface printer is a printer that connects to the network through a network card. The print server role is not, however, available in the Windows Server 2003 Web Edition.
When deciding on a print server, ensure that the print server has sufficient disk space to store print jobs waiting in the printer queue. It is recommended to use a dedicated, fast drive for the print spooler. You should consider implementing a print server cluster if your enterprise needs exceptional reliability and performance when it comes to printing.
A few characteristics of print servers are listed here:
• The Windows Management Instrumentation (WMI), a management application programming interface (API), can be used to manage printing on the network.
• Print servers can also be remotely managed.
• Administrators can control when printing devices can be utilized.
• Administrators can control access to printers.
• Priorities can be defined for print jobs.
• Print jobs can be paused, resumed, deleted, and viewed.
• Printers can be published in Active Directory so that access to printers can be controlled according to Active Directory accounts.
Understanding the Web Server Role
The application server role makes Web applications and distributed applications available to users. A Web server typically contains a copy of a World Wide Web site and can also host Web based applications. When you install a Web server, users can utilize Web based applications and download files as well.
When you add a Web server through the application server role, the following components are installed:
• Internet Information Services 6.0
• The Application Server console
• The Distributed Transaction Coordinator (DTC)
• COM+, the extension of the Component Object Model (COM)
Internet Information Services 6.0 (IIS 6.0) is Microsoft's integrated Web server that enables you to create and manage Web sites within your organization. Through IIS, you can create and manage Web sites, and share and distribute information over the Internet or an intranet. IIS 6 was introduced with Windows Server 2003 and is included with the 32-bit and 64-bit versions of the Windows Server 2003 Editions. IIS 6 includes support for a number of protocols and management tools which enable you to configure the server as a Web server, File Transfer Protocol (FTP) server, or Simple Mail Transfer Protocol (SMTP) server. The management tools included with Windows Server 2003 allow you to manage Internet Information Services on the Windows Server 2003 product platforms.
Before you can deploy IIS 6 Web servers within your enterprise, you first need to install Windows Server 2003 or upgrade to Windows Server 2003. Only after Windows Server 2003 is deployed, are you able to install IIS 6 in your environment.
After Windows Server 2003 is installed, for all editions of Windows Server 2003 other than the Web Edition, you can install IIS 6 from the Configure Your Server Wizard. When you first log on after Windows Server 2003 is installed, the Manage Your Server Wizard is initiated. To start the Configure Your Server Wizard, choose the Add Or Remove A Role link. You next have to follow the prompts of the Configure Your Server Wizard to install the Application Server (IIS, ASP.NET) option.
The protocols supported by IIS 6.0, the Microsoft integrated Web server, are listed here:
• Hypertext Transfer Protocol (HTTP) is a TCP/IP application layer protocol used to connect to Web sites and to publish Web content. HTTP handles the publishing of static and dynamic Web content. An HTTP session consists of a connection, an HTTP request, and an HTTP response.
1. Port 80 is used for HTTP connections. The client establishes a TCP connection to the server by using a TCP three way handshake.
2. After the connection is established, the client sends a HTTP GET request message to the server.
3. The server sends the client the requested Web page.
4. If HTTP Keep-Alives are enabled, the TCP connection between the client and server is maintained so that the client can request additional pages.
5. If HTTP Keep-Alives are not enabled, the TCP connection is terminated after the requested page is downloaded.
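The connection / request / response sequence above can be demonstrated end to end with Python's standard library. A throwaway local server makes the sketch self-contained; the handler class and page content are made up for the demonstration.

```python
import http.server
import socket
import threading

class Page(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>hello</html>"
        self.send_response(200)                       # 3. server sends the page
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):                     # keep the demo quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Page)
threading.Thread(target=server.serve_forever, daemon=True).start()

with socket.create_connection(server.server_address) as conn:  # 1. TCP handshake
    conn.sendall(b"GET / HTTP/1.1\r\n"                # 2. the HTTP GET request
                 b"Host: 127.0.0.1\r\n"
                 b"Connection: close\r\n\r\n")        # 5. no keep-alive: close after reply
    reply = b""
    while chunk := conn.recv(4096):
        reply += chunk

server.shutdown()
print(reply.split(b"\r\n")[0])  # the status line of the response
```

With `Connection: close`, the server terminates the TCP connection after the page is delivered; a keep-alive connection would instead stay open for further requests.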
• File Transfer Protocol (FTP) is a TCP/IP application layer protocol used for copying files to and from remote systems through the Transmission Control Protocol (TCP). FTP makes it possible for clients to upload and download files from an FTP server over an internetwork. Through IIS, you can create and administer FTP servers. You need an FTP server and an FTP client to use the protocol. An FTP session has a connection, a request, and a response.
1. The client establishes a TCP connection to the FTP server through port 21.
2. A port number over 1023 is assigned to the client.
3. The client sends a FTP command to port 21.
4. If the client needs to receive data, another connection is created with the client, to convey the data. This connection utilizes port 20.
5. The second connection remains in a TIME_WAIT state after the data is transferred to the client. The TIME_WAIT state makes it possible for additional data to be transferred, and ends when the connection times out.
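The steps above describe active mode, where the server connects back to the client from port 20. Most modern clients instead use passive mode: the client sends a PASV command, and the server's reply encodes the address and port for the data connection as p1 * 256 + p2. The reply string below is a made-up example, and the parser is a sketch, not part of any FTP library.

```python
import re

def parse_pasv(reply):
    """Extract the data-connection address and port from a 227 PASV reply."""
    a, b, c, d, p1, p2 = map(
        int, re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply).groups())
    return f"{a}.{b}.{c}.{d}", p1 * 256 + p2

host, port = parse_pasv("227 Entering Passive Mode (192,168,1,2,19,137)")
print(host, port)  # 192.168.1.2 5001
```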
• Network News Transfer Protocol (NNTP) is a TCP/IP application layer protocol used to send network news messages to NNTP servers and NNTP clients on the Internet. NNTP is a client/server and server/server protocol. The NNTP protocol enables an NNTP host to replicate its list of newsgroups and messages with another host through newsfeeds, using a push method or a pull method. An NNTP client can establish a connection with an NNTP host to download a list of newsgroups, and read the messages contained in the newsgroups. Through NNTP, you can implement private news servers to host discussion groups, or you can implement public news servers to provide customer support and help resources to Internet users. You can specify that users need to be authenticated to both read and post items to newsgroups, or you can allow access to everybody. The NNTP service can also integrate with the Windows Indexing Service for the indexing of newsgroup content. It is also fully integrated with event and performance monitoring of Windows Server 2003.
• Simple Mail Transfer Protocol (SMTP) is a TCP/IP application layer protocol used for routing and transferring e-mail between SMTP hosts on the Internet. SMTP enables IIS machines to operate as SMTP hosts to forward e-mail over the Internet. IIS can be utilized instead of Sendmail. SMTP also enables IIS machines to protect mail servers such as Microsoft Exchange servers from malicious attacks by operating between these servers and Sendmail host at the ISP of the organization. SMTP can be used to forward mail from one SMTP host to another SMTP host. SMTP cannot deliver mail directly to the client. Mail clients use POP3 or IMAP to receive e-mail. Windows Server 2003 includes the POP3 service for providing clients with mailboxes, and for handling incoming e-mail. To use the SMTP as a component of IIS, you have to install the SMTP service first if you are running a Windows Server 2003 Edition other than the Windows Server 2003 Web Edition. The SMTP service is installed on the Windows Server 2003 Web Edition by default.
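What an SMTP host actually relays is a message consisting of headers and a body. A sketch using Python's standard library follows; the addresses are hypothetical, and the network hand-off (one call such as `smtplib.SMTP("mail.example.com").send_message(msg)`) is omitted so the example needs no mail server.

```python
from email.message import EmailMessage

# Hypothetical sender and recipient for illustration only.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Status report"
msg.set_content("The print server is back online.")

# The serialized form below is what travels between SMTP hosts.
print(msg.as_string())
```

The receiving mail server stores this message until the recipient retrieves it with POP3 or IMAP, as described in the mail server role below.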
Understanding the Mail Server Role
The mail server role provides e-mail services for the network, by providing the functionality needed for users to both send and receive e-mail messages. A mail server has to exist for users to send e-mail to each other. When a mail server receives e-mail for a user, it stores the e-mail for the intended user until that particular user retrieves it from the mail server.
The primary functions of mail servers are listed here:
• Store e-mail data.
• Process client requests.
• Receive incoming e-mail from the Internet.
When you configure a server for the mail server role, the following TCP/IP based protocols are installed:
• Simple Mail Transfer Protocol (SMTP): SMTP is a TCP/IP application layer protocol used for routing and transferring e-mail between SMTP hosts on the Internet. IIS 6 has to be installed to install both the SMTP service and the Post Office Protocol 3 (POP3) service. The SMTP service has to be installed because mail servers and clients utilize this service to send e-mail.
• Post Office Protocol 3 (POP3): Mail clients use the POP3 service or IMAP to receive e-mail. Windows Server 2003 includes the POP3 service for providing clients with mailboxes, and for handling incoming e-mail. The POP3 service also enables clients to retrieve e-mail from the mail server.
Understanding the Terminal Server Role
Terminal Services can operate as an application server that remote clients connect to and run sessions from. The Terminal Services server runs the applications, and the resulting output is transmitted back to the Terminal Services client. Clients can access Terminal Services over a local area connection or a wide area connection. Terminal Services clients can be MS-DOS based clients, Windows for Workgroups (version 3.11) clients, Windows-based terminals, and Macintosh clients.
When a user connects to a Windows Server 2003 server using Remote Desktop, the resources of the server are used, and not those of the workstation. The terminal is only responsible for the keyboard, mouse, and display. Every user has his or her own individual Terminal Services session. Sessions are unique and do not affect one another. In this manner, a user connecting to a Windows Server 2003 server through Remote Desktop functions as a terminal on that server.
Once a client establishes a connection to Terminal Services, a Terminal Services session is created for the client. All processing is handled by the Terminal Services server. Clients use very little bandwidth on the underlying network when they establish a connection. Terminal Services is therefore popular in WANs where bandwidth is limited. It is also suited for mobile users who have to execute processor-intensive applications over a dial-up connection. In this case, the local machine only needs to handle the console. When applications need to be installed or updated, a single instance of the application can be installed or updated on the Terminal Services server. Users will have access to the application without you needing to install or update the application on all machines.
Remote Desktop Protocol (RDP) is the protocol that manages communications between a computer running Terminal Services and a client computer running a Terminal Services client. The connection can be established using Terminal Services on a terminal server. The Remote Desktop Connection (RDC) utility can be used for complete terminal server client utilization, or for Remote Administration. Remote Desktop Connection is installed by default with Windows XP and Windows Server 2003. You can, however, install Remote Desktop Connection on earlier Windows operating systems (OSs) such as Windows 2000, Windows NT, Windows ME, Windows 98, and Windows 95. The RDC utility is backward compatible, and can therefore interact with Terminal Services in Windows XP, Windows 2000, and Windows NT 4 Terminal Server Edition.
Understanding the Remote Access and VPN Server Role
The Windows Server 2003 remote access and VPN server role can be used to provide remote access to clients through either of the following methods:
• Dial-up connections: Dial-up networking makes it possible for a remote access client to establish a dial-up connection to a port on a remote access server. The configuration of the dial-up networking server determines what resources the remote user can access. Users that connect through a dial-up networking server connect to the network much like a standard LAN user accessing network resources.
• Virtual private networks (VPNs): VPNs provide secure, advanced connections through a non-secure network by providing data privacy: private data remains secure in a public environment. Remote access VPNs provide a common environment through which many different sources, such as intermediaries, clients, and off-site employees, can connect via web browsers or e-mail. Many companies supply their own VPN connections via the Internet. Through their ISPs, remote users running VPN client software are assured private access in a publicly shared environment. VPNs are implemented over extensive shared infrastructures using analog, ISDN, DSL, cable, dial-up, and mobile IP technologies. E-mail, database, and office applications use these secure remote VPN connections.
A few features and capabilities provided by the RRAS server are listed here:
• LAN-to-LAN routing and LAN-to-WAN routing
• Virtual private network (VPN) routing
• Network Address Translation (NAT) routing: NAT, defined in RFC 1631, translates private addresses to Internet IP addresses that can be routed on the Internet.
• Routing features, including
o IP multicasting
o Packet filtering
o Demand-dial routing
o DHCP relay
• Assignment of DHCP addresses to RRAS clients
• Remote Access Policies (RAPs): RAPs are used to grant remote access permissions.
• Layer Two Tunneling Protocol (L2TP) combines Layer 2 Forwarding (L2F) of Cisco with Point-to-Point Tunneling Protocol (PPTP) of Microsoft. L2TP is a Data-link protocol that can be used to establish Virtual Private Networks (VPNs).
• Internet Authentication Service (IAS), a Remote Authentication Dial-In User Service (RADIUS) server, provides remote authentication, authorization and accounting for users that are connecting to the network through a network access server (NAS) such as Windows Routing and Remote Access.
Understanding the Domain Controllers Role
A domain controller is a server that stores a writable copy of Active Directory, and maintains the Active Directory data store. Active Directory was designed to provide a centralized repository of information, or data store, that could securely manage the resources of an organization. The Active Directory directory services ensure that network resources are available to, and can be accessed by, users, applications, and programs. Active Directory also makes it possible for administrators to log on to one network computer, and then manage Active Directory objects on a different computer within the domain.
A domain controller is a computer running Windows 2000 or Windows Server 2003 that contains a replica of the domain directory. Domain controllers in Active Directory maintain the Active Directory data store and security policy of the domain. Domain controllers therefore also provide security for the domain by authenticating user logon attempts.
The main functions of the domain controller role within Active Directory are listed here:
• Each domain controller in a domain stores and maintains a replica of the Active Directory data store for the particular domain.
• Domain controllers in Active Directory utilize multimaster replication. What this means is that no single domain controller is the master domain controller. All domain controllers are considered peers.
• Domain controllers also automatically replicate directory information for objects stored in the domain between one another.
• Updates that are considered important are replicated immediately to the remainder of the domain controllers within the domain.
• Implementing multiple domain controllers within the domain provides fault tolerance for the domain.
• In Active Directory, domain controllers can detect collisions. Collisions take place when an attribute modified on one particular domain controller is changed on a different domain controller before the change from the first domain controller has fully propagated.
Certain master roles can be assigned to domain controllers within a domain and forest. Domain controllers that are assigned special master roles are called Operations Masters. These domain controllers host a master copy of specific data in Active Directory. They also copy data to the remainder of the domain controllers. There are five different types of master roles that can be defined for domain controllers. Two types of master roles, forestwide master roles, are assigned to one domain controller in a forest. The other three master roles, domainwide master roles, are applied to a domain controller in every domain.
The different types of master roles which can be configured on domain controllers are listed here:
• The Schema Master is a forestwide master role applied to a domain controller that manages all changes in the Active Directory schema.
• The Domain Naming Master is a forestwide master role applied to a domain controller that manages changes to the forest, such as adding and removing a domain. The domain controller serving this role also manages changes to the domain namespace.
• The Relative ID (RID) Master is a domainwide master role applied to a domain controller that creates unique ID numbers for domain controllers and manages the allocation of these numbers.
• The PDC Emulator is a domainwide master role applied to a domain controller that operates like a Windows NT primary domain controller. This role is typically necessary when there are computers in your environment running pre-Windows 2000 and XP operating systems.
• The Infrastructure Master is a domainwide master role applied to a domain controller that manages changes made to group memberships.
A Global Catalog (GC) server can also be installed on a domain controller. The global catalog is a central information store on the Active Directory objects in a forest and domain, and is used to improve performance when searching for objects in Active Directory. The first domain controller installed in a domain is designated as the global catalog server by default. The global catalog server stores a full replica of all objects in its host domain, and a partial replica of objects for the remainder of the domains in the forest. The partial replica contains those objects which are frequently searched for. It is generally recommended to configure a global catalog server for each site in a domain.
The functions of the global catalog server are summarized below:
• Global catalog servers are crucial for Active Directory's UPN functionality: they resolve user principal names (UPNs) when the domain controller handling the authentication request is unable to authenticate the user account because the account actually exists in another domain. In that case, the GC server assists in locating the user account so that the authenticating domain controller can proceed with the logon request for the user.
• The global catalog server deals with all search requests of users searching for information in Active Directory. It can find all Active Directory data irrespective of the domain in which the data is held. The GC server deals with requests for the entire forest.
• The global catalog server also makes it possible to provide Universal Group membership information to the domain controller for network logon requests.
Understanding the DNS Server Role
Domain Name System (DNS) is a hierarchical, distributed database that maps hierarchical names to IP addresses. (The IP addresses are in turn resolved to MAC addresses, although that step is handled by ARP rather than DNS.) DNS provides the means for naming IP hosts, and for locating IP hosts when they are queried for by name.
The DNS server role resolves domain names to IP addresses, and IP addresses to domain names. In this way, DNS provides the name resolution services that clients need to establish connections. A Fully Qualified Domain Name (FQDN) is the DNS name used to identify a computer on the network.
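Since an FQDN combines a host name with the DNS domain it belongs to, splitting one apart is a simple string operation. A minimal illustration (the FQDN shown is a made-up example):

```python
def split_fqdn(fqdn):
    """Split a fully qualified domain name into its host label and the
    DNS domain it belongs to. A trailing root dot, if present, is dropped."""
    host, _, domain = fqdn.rstrip(".").partition(".")
    return host, domain

print(split_fqdn("server1.sales.example.com"))  # ('server1', 'sales.example.com')
```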
A DNS server is a computer running a DNS service, such as the Windows DNS Server service or BIND, that provides domain name services. The DNS server manages the DNS database located on it. The information in a DNS server's database pertains to a portion of the DNS domain tree structure, or namespace, and is used to provide responses to client requests for name resolution. A DNS server is authoritative for the contiguous portion of the DNS namespace that it hosts.
When a DNS server is queried for name resolution services, it can do any of the following:
• Respond to the request directly by providing the requested information.
• Provide a pointer (referral) to another DNS server that can assist in resolving the query.
• Respond that the information is unavailable.
• Respond that the information does not exist.
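The four possible responses above can be sketched as a toy lookup routine; the zone contents and server names below are invented purely for illustration:

```python
# Toy model of the four DNS server responses listed above.
AUTH_ZONE = {"www.example.com": "192.0.2.10"}       # records this server owns
REFERRALS = {"example.net": "ns1.example.net"}      # other servers it knows of

def resolve(name):
    if name in AUTH_ZONE:
        return ("answer", AUTH_ZONE[name])          # direct response
    domain = name.split(".", 1)[-1]
    if domain in REFERRALS:
        return ("referral", REFERRALS[domain])      # pointer to another server
    if name.endswith(".example.com"):
        return ("nxdomain", None)                   # authoritative: name does not exist
    return ("unavailable", None)                    # cannot help with this query

print(resolve("www.example.com"))   # ('answer', '192.0.2.10')
```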
You can configure different server roles for your DNS servers. The server role that you configure for a DNS server affects the following operations of the server:
• The way in which the DNS server stores DNS data.
• The way in which the DNS server maintains data.
• Whether the DNS data in the database file can be directly edited.
The different DNS server roles which you can configure are listed here:
• Standard Primary DNS server: This DNS server owns the zones defined in its DNS database, and can make changes to those zones. A standard primary DNS server obtains zone data from the local DNS database, and is authoritative for the zone data that it contains. When a change needs to be made to the resource records of a zone, it has to be made on the primary DNS server so that it can be included in the local zone database. A primary DNS server is created when a new primary zone is added.
• Standard Secondary DNS server: This DNS server obtains a read-only copy of zones through DNS zone transfers. A secondary DNS server cannot make any changes to the information contained in its read-only copy, but it can resolve queries for name resolution. Secondary DNS servers are usually implemented to provide fault tolerance, provide fast access for clients in remote locations, and distribute the DNS server processing load evenly. If a secondary DNS server is implemented, that DNS server can continue to handle queries when the primary DNS server becomes unavailable. Secondary DNS servers also assist in reducing the processing load of the primary DNS server. It is recommended to install at least one primary DNS server and one secondary DNS server for each DNS zone.
• Caching-only DNS server: A caching-only DNS server performs queries and then stores the results. All information on a caching-only DNS server is therefore data that has been cached while the server resolved queries. Caching-only DNS servers do not host zones and are not authoritative for any DNS domain.
• Master DNS servers: The DNS servers from which secondary DNS servers obtain zone information in the DNS hierarchy are called master DNS servers. When a secondary DNS server is configured, you have to specify the master server from which it will obtain zone information. Zone transfer enables a secondary DNS server to obtain zone information from its configured master DNS server. A secondary DNS server can also transfer its zone data to other secondary DNS servers that are beneath it in the DNS hierarchy. Here, the secondary DNS server is regarded as the master server of the subordinate secondary DNS servers. A secondary DNS server initiates the zone transfer process from its particular master server when it is brought online.
• Dynamic DNS servers: Windows 2000, Windows XP and Windows Server 2003 computers can dynamically update the resource records of a DNS server when a client's IP addressing information is added or renewed through the Dynamic Host Configuration Protocol (DHCP). Both DHCP and Dynamic DNS (DDNS) updates make this possible. When dynamic DNS updates are enabled, a client sends a message to the DNS server when changes are made to its IP addressing data, indicating that the client's A (host) resource record needs to be updated.
Understanding the WINS Server Role
The Windows Internet Name Service (WINS) server role provides name resolution services for clients that need to resolve NetBIOS names to IP addresses, and vice versa. A WINS server is an enhanced NetBIOS name server (NBNS) designed by Microsoft to resolve NetBIOS computer names to IP addresses. WINS can resolve NetBIOS names for local hosts and remote hosts. WINS registers NetBIOS computer names, and stores these client name registrations in the WINS database. The registrations are used when clients query for host name resolution and service information, and to resolve a NetBIOS name to an IP address. Clients that are configured to utilize a WINS server as a NetBIOS name server are called WINS-enabled clients. If the WINS server resolves the NetBIOS name to an IP address, no broadcast traffic is sent over the network. Broadcasts are only utilized if the WINS server is unable to resolve the NetBIOS name. A WINS-enabled client can communicate with a WINS server that is located anywhere on the internetwork.
Although Windows 2000 was the first Windows operating system for which NetBIOS naming was no longer required, you might still need to provide support for NetBIOS naming if you have legacy applications. Remember that all Windows operating systems prior to Windows 2000 require NetBIOS name support.
To implement WINS, you only need one WINS server for an internetwork. However, implementing two WINS servers provides fault tolerance for name resolution. The secondary WINS server would be used for name resolution if the primary WINS server is unavailable to service WINS clients' requests.
A WINS server can handle roughly 1,500 name registrations and 4,500 name queries per minute. It is recommended to have one WINS server and a backup server for every 10,000 WINS clients. When you configure the WINS server role, the WINS server must be statically assigned the following TCP/IP parameters: IP address, subnet mask and default gateway.
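As background to the registrations WINS stores: a NetBIOS name is a fixed 16-byte value, with the sixteenth byte identifying the registered service. A small sketch of forming such a name (the host name is a made-up example, and real clients also apply a further wire encoding not shown here):

```python
def netbios_name(hostname, suffix=0x00):
    """Form the 16-byte NetBIOS name for a host: the name is upper-cased,
    space-padded to 15 characters, and a one-byte suffix identifying the
    service (0x00 = workstation) is appended."""
    if len(hostname) > 15:
        raise ValueError("NetBIOS names are limited to 15 characters")
    return hostname.upper().ljust(15).encode("ascii") + bytes([suffix])

name = netbios_name("FILESRV01")   # hypothetical host name
print(len(name), name)
```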
Understanding the DHCP Server Role
DHCP is a service and protocol which runs on a Windows Server 2003 operating system. DHCP functions at the application layer of the TCP/IP protocol stack. One of the primary tasks of the protocol is to automatically assign IP addresses to DHCP clients.
A server running the DHCP service is called a DHCP server. The DHCP protocol automates the configuration of TCP/IP clients because IP addressing occurs automatically through the service. You can configure a server as a DHCP server so that it assigns IP addresses to DHCP clients with no manual intervention. IP addresses that are assigned through a DHCP server are regarded as dynamically assigned IP addresses.
The DHCP server assigns IP addresses from one or more predetermined IP address ranges, called scopes. A DHCP scope can be defined as the set of IP addresses which the DHCP server can allocate or assign to DHCP clients. A scope contains specific configuration information for clients that have IP addresses within the particular scope. Scope information for each DHCP server is specific to that particular DHCP server only, and is not shared between DHCP servers. Scopes are configured by administrators.
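The scope-based allocation described above can be sketched as a toy model; this is not how the Windows DHCP service is implemented, just an illustration of a scope handing out addresses and letting renewing clients keep theirs:

```python
import ipaddress

class Scope:
    """Toy DHCP scope: a pool of free addresses plus a table of leases."""

    def __init__(self, first, last):
        start = int(ipaddress.IPv4Address(first))
        end = int(ipaddress.IPv4Address(last))
        self.free = [str(ipaddress.IPv4Address(i)) for i in range(start, end + 1)]
        self.leases = {}                      # client MAC -> IP address

    def assign(self, mac):
        if mac not in self.leases:            # renewing clients keep their IP
            self.leases[mac] = self.free.pop(0)
        return self.leases[mac]

# Example range chosen purely for illustration.
scope = Scope("10.0.0.10", "10.0.0.20")
print(scope.assign("00-0C-29-AA-BB-CC"))
```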
The functions of the DHCP server are outlined below:
• Dynamically assign IP addresses to DHCP clients.
• Allocate the following TCP/IP configuration information to DHCP clients:
o Subnet mask information
o Default gateway IP addresses
o Domain Name System (DNS) IP addresses
o Windows Internet Naming Service (WINS) IP addresses
You can increase the availability of DHCP servers by using the 80/20 Rule if you have two DHCP servers located on different subnets. The 80/20 Rule is applied as follows:
• Allocate 80 percent of the IP addresses to the DHCP server which resides on the local subnet.
• Allocate 20 percent of the IP addresses to the DHCP Server on the remote subnet.
If the DHCP server that is allocated 80 percent of the IP addresses fails, the remote DHCP server takes over assigning IP addresses to the DHCP clients.
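The 80/20 arithmetic can be illustrated with a short sketch that splits a scope's address range between the local and remote servers (the addresses are example values):

```python
import ipaddress

def split_scope_80_20(first, last):
    """Divide a DHCP scope between a local and a remote DHCP server
    using the 80/20 Rule described above."""
    start = int(ipaddress.IPv4Address(first))
    end = int(ipaddress.IPv4Address(last))
    total = end - start + 1
    local_count = total * 80 // 100
    boundary = start + local_count - 1
    return (
        (first, str(ipaddress.IPv4Address(boundary))),        # 80% for the local server
        (str(ipaddress.IPv4Address(boundary + 1)), last),     # 20% for the remote server
    )

local, remote = split_scope_80_20("192.168.1.1", "192.168.1.200")
print("local server range :", local)
print("remote server range:", remote)
```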
With Windows Server 2003 DHCP, three options are available for registering IP addresses in DNS. The options can be configured for the DHCP server, or for each individual scope. The options that enable or disable the DHCP service's ability to dynamically update DNS records on behalf of the client are:
• The DHCP server can be configured not to register any DHCP client IP addresses when it assigns IP addresses to these clients.
• The DHCP server can be configured to always register all client IP addresses when clients receive IP addresses from the DHCP server.
• The default option results in the DHCP server registering the IP addresses of clients with the authoritative DNS server, based on the client's request for an IP address.
Understanding the Streaming Media Server Role
The streaming media server role provides media services so that clients can access streaming audio and video. Windows Media Services is used to provide these media services, and can be configured on standard server platforms and on enterprise platforms.
Windows Media Services is not available in the following editions of Windows Server 2003:
• Windows Server 2003 Web Edition
• Windows Server 2003 64-bit versions.
Understanding Certificate Authority (CA) Servers
A Certificate Authority is an entity that generates and validates digital certificates. The CA adds its own signature to the public key of the client. By using the tools provided by Microsoft, you can create an internal CA structure within your organization.
A digital certificate associates a public key with an owner. The certificate verifies the identity of the owner. A certificate cannot easily be forged because the authority that issued the certificate digitally signs it. Certificates are issued for functions such as the encryption of data, code signing, Web user and Web server authentication, and securing e-mail. Certificates in Windows XP and Windows Server 2003 are managed by the Data Protection API. When a certificate is issued to a client, it is stored in the Registry and in Active Directory. You can also store certificates on smart cards. The information included in a certificate is determined by the type of certificate being used.
Certificate Authorities (CAs) are servers which are configured to issue certificates to users, computers, and services. CAs also manage certificates. An organization can have multiple CAs, which are arranged in a logical manner. A CA can be a trusted third party entity such as VeriSign or Thawte, or it can be an internal entity of the organization. An example of an internal CA entity is Windows Server 2003 Certificate Services. Windows Server 2003 Certificate Services can be used to create certificates for users and computers in Active Directory domains.
The functions performed by Certificate Authorities (CAs) are listed below:
• Accepts the request for a certificate from a user, computer, application, or service.
• Authenticates the identity of the user, computer, or service requesting the certificate. The CA applies its policies, taking into account the type of certificate being requested, to verify the identity of the requestor.
• Creates the certificate for the requestor.
• Digitally signs the certificate using its own private key.
Windows Certificate Services is used to create a Certificate Authority on Windows Server 2003 servers. The first CA that is installed becomes the root CA. The common practice is to first install the root CA, and then use the root CA to validate all the other CAs within the organization. A root CA is the most trusted CA in a CA hierarchy. When a root CA issues certificates to other CAs, these CAs become subordinate CAs of the root CA. When a root CA is online, it is used to issue certificates to subordinate CAs. The root CA does not usually issue certificates directly to users, computers, applications or services.
A subordinate CA can also issue certificates to other subordinate CAs. These subordinate CAs are called intermediate CAs. While an intermediate CA is subordinate to the root CA, it is considered superior to those subordinate CAs to which it issued certificates. Subordinate CAs which only issue certificates to users, and not to other subordinate CAs, are called leaf CAs.
The types of CA which you can install are listed here:
• Enterprise root CA: This is the topmost CA in the CA hierarchy, and is the first CA installed in the enterprise. Enterprise root CAs are reliant on Active Directory. Enterprise root CAs issue certificates to subordinate CAs.
• Enterprise Subordinate CA: This CA also needs Active Directory, and is used to issue certificates to users and computers.
• Stand-alone Root CA: A stand-alone root CA is the topmost CA in the certificate chain. A stand-alone root CA is not, however, dependent on Active Directory, and can be removed from the network. This makes a stand-alone root CA the solution for implementing a secure offline root CA.
• Stand-alone Subordinate CA: This type of CA is also not dependent on Active Directory, and is used to issue certificates to users, computers, and other CAs.
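Regardless of which CA types are deployed, certificate validation walks the chain from a leaf certificate up to a trusted root. A toy model of that walk, with invented certificate and CA names (real validation also checks signatures, validity periods, and revocation, none of which is modeled here):

```python
# Maps each certificate or CA to its issuer; all names are hypothetical.
ISSUED_BY = {
    "web-server-cert": "Leaf-CA",
    "Leaf-CA": "Intermediate-CA",
    "Intermediate-CA": "Enterprise-Root-CA",
    "Enterprise-Root-CA": "Enterprise-Root-CA",  # root CAs are self-signed
}

def chain_to_root(cert, trusted_roots):
    """Walk the issuer chain up to the self-signed root; the chain is
    only acceptable if that root is in the trusted set."""
    chain = [cert]
    while ISSUED_BY[chain[-1]] != chain[-1]:
        chain.append(ISSUED_BY[chain[-1]])
    return chain if chain[-1] in trusted_roots else None

print(chain_to_root("web-server-cert", {"Enterprise-Root-CA"}))
```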
Understanding the Configure Your Server Wizard
The Configure Your Server Wizard is one of the main wizards used to perform administrative tasks on Windows Server 2003 computers, and is used to configure server roles. Windows Server 2003 provides a new tool for defining and managing server roles, namely the Manage Your Server utility. The actual wizard for applying server roles to computers is the Configure Your Server Wizard, which is included within, and managed through, the Manage Your Server utility.
To access the Manage Your Server utility and use the Configure Your Server Wizard:
1. Click Start, click Administrative Tools, and then click Manage Your Server.
The main screen of the Manage Your Server utility is made up as follows:
• At the top of the Manage Your Server main screen, are three buttons, labelled as follows:
o Add or remove a role button; for initiating the Configure Your Server Wizard.
o Read about server roles button; for accessing information on server roles.
o Read about remote administration button; for accessing information on remote administration.
• The left end of the screen contains the server roles which are already configured for the particular server.
• Each listed configured server role is accompanied by buttons which can be used to view information on the existing role, or manage the existing server role. The buttons which are displayed differ between the existing server roles.
You can also initiate the Configure Your Server Wizard by:
1. Clicking Start, Administrative Tools, and then clicking Configure Your Server.
After the Configure Your Server Wizard is initiated, the following preliminary steps must be completed before any server roles can be added:
• Install all modems and network cards.
• Attach all necessary cables.
• Create an Internet connection if the server is to be used for Internet connectivity.
• Turn on all peripherals.
• Have the Windows Server 2003 installation CD at hand.
Clicking the Next button on the Preliminary Steps screen invokes the Configure Your Server Wizard to test network connections and verify the operating system, and then displays the Server Role screen.
The Server Role screen contains the following columns:
• Server role column; indicates the server roles which can be added or removed.
• Configured column; indicates whether a server role is configured or not configured.
If you want to navigate to the Add or Remove Programs in Control Panel, click the Add or Remove Programs link on the Server Role screen.
How to add an application server role to Windows Server 2003
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. Click the Add or remove a role button.
3. The Configure Your Server Wizard initiates.
4. Click Next on the Preliminary Steps page of the wizard.
5. When the Server Role page opens, select the Application server (IIS, ASP.NET) server role, and then click Next.
6. The Application Server Options page opens.
7. Select the FrontPage Server Extensions checkbox to include Web server extensions in the configuration.
8. Select the Enable ASP.NET checkbox so that Web applications created through ASP.NET can be utilized. Click Next.
9. Verify the settings which you have selected on the Summary of Selections. Click Next.
10. The installation of the components occurs next.
11. Click Finish.
How to install the Remote Access and VPN server role using the Configure Your Server Wizard
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. Select the Add or remove a role option.
3. The Configure Your Server Wizard starts.
4. On the Preliminary Steps page, click Next.
5. A message appears, informing you that the Configure Your Server Wizard is detecting network settings and server information.
6. When the Server Role page appears, select the Remote Access/VPN Server option and then click Next.
7. On the Summary of Selections page, click Next.
8. The Welcome to the Routing and Remote Access Server Setup Wizard page is displayed.
How to add the global catalog server role on a domain controller
1. Click Start, Administrative Tools, and then click Active Directory Sites and Services.
2. In the console tree, expand Sites, and then expand the site that contains the domain controller which you want to configure as a global catalog server.
3. Expand the Servers folder, and locate and then click the domain controller that you want to designate as a global catalog server.
4. In the details pane, right-click NTDS Settings and click Properties on the shortcut menu.
5. The NTDS Settings Properties dialog box opens.
6. The General tab is where you specify the domain controller as a global catalog server.
7. Enable the Global Catalog checkbox.
8. Click OK.
How to remove the global catalog server role from a domain controller
1. Open the Active Directory Sites and Services console.
2. In the console tree, locate and click the domain controller currently configured as the global catalog server.
3. Right-click NTDS Settings and click Properties on the shortcut menu to open the NTDS Settings Properties dialog box.
4. Clear the Global Catalog checkbox.
5. Click OK.
How to install the DHCP server role
1. Click Start, Control Panel, and then click Add Or Remove Programs.
2. When the Add Or Remove Programs dialog box opens, click Add/Remove Windows Components.
3. This starts the Windows Components Wizard.
4. In the Components list box, select Networking Services, and then click the Details button.
5. The Networking Services dialog box opens.
6. In the Subcomponents Of Networking Services list box, check the Dynamic Host Configuration Protocol (DHCP) checkbox.
7. Click OK. Click Next.
8. When The Completing The Windows Components Wizard page is displayed, click Finish.
How to implement a caching-only DNS server
1. Open Control Panel.
2. Double-click Add/Remove Programs, and then click Add/Remove Windows Components.
3. The Windows Components Wizard starts.
4. Click Networking Services, and then click Details.
5. In the Networking Services dialog box, select the checkbox for Domain Name System (DNS) in the list. Click OK. Click Next.
6. When The Completing The Windows Components Wizard page is displayed, click Finish.
7. Do not add or configure any zones for the DNS server. The DNS Server service functions as a caching-only DNS server by default, so no further configuration is necessary to set up a caching-only DNS server.
8. You should verify that the server root hints are configured correctly.
How to add the Terminal Services server role to Windows Server 2003 using Add Or Remove Programs in Control Panel
1. Click Start, Control Panel, and then click Add Or Remove Programs.
2. Click Add/Remove Windows Components to initiate the Windows Components Wizard.
3. Select the Terminal Server checkbox. Click Next.
4. When the Terminal Server Setup page is displayed, read the message on Terminal Server Licensing and Terminal Server mode. Click Next.
5. Select the appropriate security setting. Click Next.
6. After the necessary files are copied, click Finish.
7. When the System Settings Change page is displayed, click Yes to reboot the computer.
8. Terminal Services Configuration, Terminal Services Manager, and Terminal Server Licensing are added to the Administrative Tools menu.
How to install IIS 6.0 using the Configure Your Server Wizard
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. In the Manage Your Server main screen, click Add or remove a role.
3. The Configure Your Server Wizard starts.
4. The Preliminary Steps screen is a warning screen that prompts you to verify that the requirements for the installation have been met. Click Next.
5. The network connections configured on the machine are tested and verified before the Wizard displays the next screen.
6. On the Configuration Options screen, choose one of the following options:
o Typical configuration for a first server: You would choose this option to install the server as a domain controller, and to install the Active Directory directory service, DNS service, and DHCP service.
o Custom configuration: This option should be selected to install IIS 6 on the server.
Click Next.
7. On the Server Role screen, choose Application Server (IIS, ASP.NET) as the role which you want to install on the server. From this screen, you can also select to install Terminal, Print, DNS, and DHCP services. Selecting the Application Server (IIS, ASP.NET) option installs IIS, ASP.NET and additional components so that the server can host websites and FTP sites. Click Next.
8. On the Application Server Options screen, you can select that these optional components be installed:
o FrontPage Server Extensions, for users to develop Web content and publish Web content on the IIS machine via Microsoft FrontPage or Microsoft Visual Studio.
o Microsoft Data Engine, for hosting SQL databases on the IIS machine.
o Enable ASP.NET: This option is enabled by default. ASP.NET is the scripting framework utilized for running IIS applications.
Click Next.
9. The Summary of Selections screen displays a summary of the components which you selected for installation. Verify that the correct items are listed on this screen. The Enable COM+ for remote transactions option is automatically added. Click Next.
10. The installation process now commences. You would either have to insert the Windows Server 2003 CD, or indicate the location of the installation files. The Application Selections screen is displayed, the Configuration Components window appears, and the necessary files are copied.
Understanding the File Server Role
The file server role is a widely used role when configuring servers in Windows Server 2003 based networks, because a file server stores data for network users and provides access to files stored on it. The file server role is, however, not available in the Windows Server 2003 Web Edition. A file stored on a file server volume can be accessed by users who have the necessary rights to the directories in which the files are stored.
File servers provide the following functionality to users:
• Enable users to store files in a centralized location.
• Enable users to share files with other users.
A few characteristics and features of the file server role are listed below:
• Files and folder resources can be shared between network users.
• Administrators can manage the following aspects of file servers:
o Access to files and folders
o Disk space
o Disk quotas can be implemented to control the amount of space which users can utilize.
• For file servers that have NTFS volumes:
o NTFS security can be used to protect files from users who are not authorized to access the files and folders.
o Encrypting File System (EFS) enables users to encrypt files and folders, and entire data drives on NTFS formatted volumes. EFS secures confidential corporate data from unauthorized access.
o Distributed File System (Dfs) provides a single hierarchical file system that assists in organizing shared folders on multiple computers in the network. Dfs provides a single logical file system structure by concealing the underlying file share structure within a virtual folder structure. Users only see a single file structure even though there are multiple folders located on different file servers within the organization.
• The Offline Files feature can be enabled if necessary. Offline Files makes it possible for a user to mirror server files to a local laptop, and ensures that the laptop files and server files stay in sync. For laptop users, Offline Files ensures that server-based files remain accessible when the user is not connected to the network.
Understanding the Print Server Role
The print server role provides network printing capabilities for the network. Through the print server role, you can configure a server to manage printing functions on the network. Users typically connect to a network printer through a connection to a print server. The print server is the computer on which the print drivers are located, and it manages printing between printers and client computers. With Windows NT, Windows 2000, Windows XP, and Windows Server 2003, print servers supply clients with the necessary printer drivers. Print servers also manage communication between the printers and the client computers, manage the print queues, and can supply audit logs of jobs printed by users. A network interface printer is a printer that connects to the network through a network card. The print server role is, however, not available in the Windows Server 2003 Web Edition.
When deciding on a print server, ensure that the print server has sufficient disk space to store print jobs waiting in the printer queue. It is recommended to use a dedicated, fast drive for the print spooler. You should consider implementing a print server cluster if your enterprise needs exceptional reliability and performance when it comes to printing.
A few characteristics of print servers are listed here:
• Windows Management Instrumentation (WMI), a management application programming interface (API), can be used to manage printing on the network.
• Print servers can also be remotely managed.
• Administrators can control when printing devices can be utilized.
• Administrators can control access to printers.
• Priorities can be defined for print jobs.
• Print jobs can be paused, resumed, viewed, and deleted.
• Printers can be published in Active Directory so that access to printers can be controlled according to Active Directory accounts.
Understanding Web Servers
The application server role makes Web applications and distributed applications available to users. A Web server typically contains a copy of a World Wide Web site and can also host Web-based applications. When you install a Web server, users can utilize Web-based applications and download files as well.
When you add a Web server through the application server role, the following components are installed:
• Internet Information Services 6.0
• The Application Server console
• The Distributed Transaction Coordinator (DTC)
• COM+, the extension of the Component Object Model (COM)
Internet Information Services 6.0 (IIS 6.0) is Microsoft's integrated Web server, which enables you to create and manage Web sites within your organization. Through IIS, you can create and manage Web sites, and share and distribute information over the Internet or an intranet. IIS 6 was introduced with Windows Server 2003, and is included with the 32-bit and 64-bit versions of the Windows Server 2003 editions. IIS 6 includes support for a number of protocols and management tools which enable you to configure the server as a Web server, File Transfer Protocol (FTP) server or Simple Mail Transfer Protocol (SMTP) server. The management tools included with Windows Server 2003 allow you to manage Internet Information Services on the Windows Server 2003 product platforms.
Before you can deploy IIS 6 Web servers within your enterprise, you first need to install Windows Server 2003 or upgrade to Windows Server 2003. Only after Windows Server 2003 is deployed, are you able to install IIS 6 in your environment.
After Windows Server 2003 is installed, for all editions of Windows Server 2003 other than the Web Edition, you can install IIS 6 from the Configure Your Server Wizard. When you first log on after Windows Server 2003 is installed, the Manage Your Server Wizard is initiated. To start the Configure Your Server Wizard, choose the Add Or Remove A Role link. You next have to follow the prompts of the Configure Your Server Wizard to install the Application Server (IIS, ASP.NET) option.
The protocols supported by IIS 6.0, the Microsoft integrated Web server, are listed here:
• Hypertext Transfer Protocol (HTTP) is a TCP/IP application layer protocol used to connect to websites, and to create Web content. HTTP handles the publishing of static and dynamic Web content. An HTTP session consists of a connection, an HTTP request, and an HTTP response.
1. Port 80 is used for HTTP connections. The client establishes a TCP connection to the server by using a TCP three way handshake.
2. After the connection is established, the client sends an HTTP GET request message to the server.
3. The server sends the client the requested Web page.
4. If HTTP Keep-Alives are enabled, the TCP connection between the client and server is maintained so that the client can request additional pages.
5. If HTTP Keep-Alives are not enabled, the TCP connection is terminated after the requested page is downloaded.
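The request sent in step 2, and the Connection header that governs the Keep-Alive behavior in steps 4 and 5, can be sketched as follows (the host name is an example):

```python
def build_get_request(host, path="/", keep_alive=True):
    """Build the raw HTTP/1.1 GET request a client sends after the TCP
    connection is established; the Connection header tells the server
    whether to keep the connection open for further requests."""
    connection = "keep-alive" if keep_alive else "close"
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: {connection}\r\n"
        "\r\n"                      # blank line ends the request headers
    )

print(build_get_request("www.example.com"))
```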
• File Transfer Protocol (FTP) is a TCP/IP application layer protocol used for copying files to and from remote systems over the Transmission Control Protocol (TCP). FTP makes it possible for clients to upload files to, and download files from, an FTP server over an internetwork. Through IIS, you can create and administer FTP servers. You need an FTP server and an FTP client to use the protocol. An FTP session has a connection, a request, and a response.
1. The client establishes a TCP connection to the FTP server through port 21.
2. A port number over 1023 is assigned to the client.
3. The client sends an FTP command to port 21.
4. If the client needs to receive data, another connection is created with the client, to convey the data. This connection utilizes port 20.
5. The second connection remains in a TIME_WAIT state after the data is transferred to the client. The TIME_WAIT state makes it possible for additional data to be transferred, and ends when the connection times out.
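In the active-mode exchange described above, the client tells the server where to open the data connection using the FTP PORT command, whose argument encodes the client's IP address and port as six comma-separated byte values. A small sketch (the address and port are example values):

```python
def ftp_port_argument(ip, port):
    """Encode a client data-connection endpoint as the argument of the
    FTP PORT command: the four address octets plus the port split into
    its high and low bytes (h1,h2,h3,h4,p1,p2)."""
    h1, h2, h3, h4 = ip.split(".")
    return f"{h1},{h2},{h3},{h4},{port // 256},{port % 256}"

print(ftp_port_argument("192.0.2.7", 1050))  # 1050 = 4*256 + 26
```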
• Network News Transfer Protocol (NNTP) is a TCP/IP application layer protocol used to send network news messages between NNTP servers and NNTP clients on the Internet. NNTP is both a client/server and a server/server protocol. NNTP enables an NNTP host to replicate its list of newsgroups and messages with another host through newsfeeds, using a push method or a pull method. An NNTP client can establish a connection with an NNTP host to download a list of newsgroups and read the messages contained in them. Through NNTP, you can implement private news servers to host discussion groups, or public news servers to provide customer support and help resources to Internet users. You can require users to be authenticated both to read and to post items to newsgroups, or you can allow access to everybody. The NNTP service can also integrate with the Windows Indexing Service for indexing newsgroup content, and it is fully integrated with the event and performance monitoring of Windows Server 2003.
• Simple Mail Transfer Protocol (SMTP) is a TCP/IP application layer protocol used for routing and transferring e-mail between SMTP hosts on the Internet. SMTP enables IIS machines to operate as SMTP hosts that forward e-mail over the Internet, and IIS can be used instead of Sendmail. SMTP also enables IIS machines to protect mail servers such as Microsoft Exchange servers from malicious attacks by operating between those servers and the Sendmail host at the organization's ISP. SMTP can forward mail from one SMTP host to another, but it cannot deliver mail directly to the client; mail clients use POP3 or IMAP to receive e-mail. Windows Server 2003 includes the POP3 service for providing clients with mailboxes and for handling incoming e-mail. To use SMTP as a component of IIS, you first have to install the SMTP service if you are running a Windows Server 2003 edition other than the Web Edition; the SMTP service is installed on the Windows Server 2003 Web Edition by default.
Understanding the Mail Server Role
The mail server role provides e-mail services for the network, by providing the functionality needed for users to both send and receive e-mail messages. A mail server has to exist for users to send e-mail to each other. When a mail server receives e-mail for a user, it stores the e-mail for the intended user until that particular user retrieves it from the mail server.
The primary functions of mail servers are listed here:
• Store e-mail data.
• Process client requests.
• Receive incoming e-mail from the Internet.
When you configure a server for the mail server role, the following TCP/IP based protocols are installed:
• Simple Mail Transfer Protocol (SMTP): SMTP is a TCP/IP application layer protocol used for routing and transferring e-mail between SMTP hosts on the Internet. IIS 6 has to be installed to install both the SMTP service and the Post Office Protocol 3 (POP3) service. The SMTP service has to be installed because mail servers and clients utilize this service to send e-mail.
• Post Office Protocol 3 (POP3): Mail clients use the POP3 service or IMAP to receive e-mail. Windows Server 2003 includes the POP3 service for providing clients with mailboxes, and for handling incoming e-mail. The POP3 service also enables clients to retrieve e-mail from the mail server.
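The division of labour between the two protocols can be sketched with the standard Python email module: SMTP carries a composed message between hosts, while POP3 (or IMAP) lets the client collect it from its mailbox later. The addresses below are hypothetical, and the commented-out line shows where an actual SMTP submission would occur.

```python
from email.message import EmailMessage

# Compose a message; SMTP would relay it between mail hosts, and the
# recipient's client would later retrieve it from the mailbox via POP3.
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Status report"
msg.set_content("Relayed by SMTP; retrieved by POP3 or IMAP.")

# import smtplib
# smtplib.SMTP("mail.example.com").send_message(msg)   # actual delivery
```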
Understanding the Terminal Server Role
Terminal Services enables a server to operate as an application server that remote clients can connect to and run sessions from. The Terminal Services server runs the applications, and the resulting display data is transmitted back to the Terminal Services client. Clients can access Terminal Services over a local area connection or a wide area connection. Terminal Services clients can be MS-DOS based clients, Windows for Workgroups (version 3.11) clients, Windows based terminals, and Macintosh clients.
When a user connects to a Windows Server 2003 server using Remote Desktop, the resources of the server are used, not those of the workstation. The terminal is only responsible for the keyboard, mouse, and display. Every user has an individual Terminal Services session; sessions are unique and do not affect one another. In this manner, a user connecting to a Windows Server 2003 server through Remote Desktop functions as a terminal on that server.
Once a client establishes a connection to Terminal Services, the server creates a Terminal Services session for that client. All processing is handled by the Terminal Services server, and clients use very little bandwidth on the underlying network once a connection is established. Terminal Services is therefore popular in WANs where bandwidth is limited, and is also suited to mobile users who have to run processor intensive applications over a dial-up connection; in this case, the local machine only needs to handle the console. When applications need to be installed or updated, a single instance of the application can be installed or updated on the Terminal Services server, and users have access to the application without you needing to install or update it on every machine.
Remote Desktop Protocol (RDP) is the protocol that manages communications between a computer running Terminal Services and a client computer running a Terminal Services client. The connection is established with Terminal Services on a terminal server. The Remote Desktop Connection (RDC) utility can be used as a full terminal server client or for remote administration. Remote Desktop Connection is installed by default with Windows XP and Windows Server 2003. You can however install Remote Desktop Connection on earlier Windows operating systems (OSs) such as Windows 2000, Windows NT, Windows ME, Windows 98, and Windows 95. The RDC utility is backward compatible, and can therefore interact with Terminal Services in Windows XP, Windows 2000, and Windows NT 4 Terminal Server Edition.
Understanding the Remote Access and VPN Server Role
The Windows Server 2003 remote access and VPN server role can be used to provide remote access to clients through either of the following methods:
• Dial-up connections: Dial-up networking makes it possible for a remote access client to establish a dial-up connection to a port on a remote access server. The configuration of the dial-up networking server determines what resources the remote user can access. Users that connect through a dial-up networking server, connect to the network much like a standard LAN user accessing network resources.
• Virtual private networks (VPNs): VPNs provide secure connections through a non-secure network by providing data privacy; private data remains secure in a public environment. Remote access VPNs provide a common environment through which many different users, such as business partners, clients, and off-site employees, can gain access. Many companies supply their own VPN connections via the Internet: through their ISPs, remote users running VPN client software are assured private access in a publicly shared environment. Using analog, ISDN, DSL, cable, dial-up, and mobile IP technologies, VPNs are implemented over extensive shared infrastructures. E-mail, database, and office applications use these secure remote VPN connections.
A few features and capabilities provided by the RRAS server are listed here:
• LAN-to-LAN routing and LAN-to-WAN routing
• Virtual private network (VPN) routing
• Network Address Translation (NAT) routing: NAT, defined in RFC 1631, translates private addresses to public IP addresses that can be routed on the Internet.
• Routing features, including
o IP multicasting
o Packet filtering
o Demand-dial routing
o DHCP relay
• Assign DHCP addresses to RRAS clients
• Remote Access Policies (RAPs): RAPs are used to grant remote access permissions.
• Layer Two Tunneling Protocol (L2TP) support: L2TP combines Cisco's Layer 2 Forwarding (L2F) with Microsoft's Point-to-Point Tunneling Protocol (PPTP). L2TP is a data-link layer protocol that can be used to establish virtual private networks (VPNs).
• Internet Authentication Service (IAS), a Remote Authentication Dial-In User Service (RADIUS) server, provides remote authentication, authorization and accounting for users that are connecting to the network through a network access server (NAS) such as Windows Routing and Remote Access.
Understanding the Domain Controllers Role
A domain controller is a server that stores a writable copy of Active Directory and maintains the Active Directory data store. Active Directory was designed to provide a centralized repository of information, or data store, that can securely manage the resources of an organization. The Active Directory directory service ensures that network resources are available to, and can be accessed by, users, applications, and programs. Active Directory also makes it possible for administrators to log on to one network computer and then manage Active Directory objects on a different computer within the domain.
A domain controller is a computer running Windows 2000 or Windows Server 2003 that contains a replica of the domain directory. Domain controllers in Active Directory maintain the Active Directory data store and security policy of the domain. Domain controllers therefore also provide security for the domain by authenticating user logon attempts.
The main functions of the domain controller role within Active Directory are listed here:
• Each domain controller in a domain stores and maintains a replica of the Active Directory data store for the particular domain.
• Domain controllers in Active Directory utilize multimaster replication. What this means is that no single domain controller is the master domain controller. All domain controllers are considered peers.
• Domain controllers also automatically replicate directory information for objects stored in the domain between one another.
• Updates that are considered important are replicated immediately to the remainder of the domain controllers within the domain.
• Implementing multiple domain controllers within the domain provides fault tolerance for the domain.
• In Active Directory, domain controllers can detect collisions. Collisions take place when an attribute modified on one particular domain controller is changed on a different domain controller before the change on the initial domain controller has fully propagated.
Certain master roles can be assigned to domain controllers within a domain and forest. Domain controllers that are assigned special master roles are called Operations Masters. These domain controllers host a master copy of specific data in Active Directory. They also copy data to the remainder of the domain controllers. There are five different types of master roles that can be defined for domain controllers. Two types of master roles, forestwide master roles, are assigned to one domain controller in a forest. The other three master roles, domainwide master roles, are applied to a domain controller in every domain.
The different types of master roles which can be configured on domain controllers are listed here:
• The Schema Master is a forestwide master role applied to a domain controller that manages all changes in the Active Directory schema.
• The Domain Naming Master is a forestwide master role applied to a domain controller that manages changes to the forest, such as adding and removing a domain. The domain controller serving this role also manages changes to the domain namespace.
• The Relative ID (RID) Master is a domainwide master role applied to a domain controller that allocates blocks of relative IDs to the domain controllers in its domain; these IDs are used to build unique SIDs for newly created security principals.
• The PDC Emulator is a domainwide master role applied to a domain controller that operates like a Windows NT primary domain controller. This role is typically necessary when there are computers in your environment running operating systems earlier than Windows 2000.
• The Infrastructure Master is a domainwide master role applied to a domain controller that manages changes made to group memberships.
A global catalog (GC) server can also be installed on a domain controller. The global catalog is a central information store of the Active Directory objects in a forest and domain, and is used to improve performance when searching for objects in Active Directory. The first domain controller installed in a domain is designated as the global catalog server by default. The global catalog server stores a full replica of all objects in its host domain, and a partial replica of the objects in the remaining domains in the forest. The partial replica contains those objects which are frequently searched for. It is generally recommended to configure a global catalog server for each site in a domain.
The functions of the global catalog server are summarized below:
• Global catalog servers are crucial to Active Directory's user principal name (UPN) functionality: when the domain controller handling an authentication request cannot authenticate a user account because the account actually exists in another domain, the GC server helps locate the user account so that the authenticating domain controller can proceed with the logon request.
• The global catalog server deals with all search requests of users searching for information in Active Directory. It can find all Active Directory data irrespective of the domain in which the data is held. The GC server deals with requests for the entire forest.
• The global catalog server also supplies universal group membership information to the domain controller that is processing a user's network logon request.
Understanding the DNS Server Role
Domain Name System (DNS) is a hierarchically distributed database that defines hierarchical names which can be resolved to IP addresses; the IP addresses are in turn resolved to MAC addresses by ARP. DNS provides the means for naming IP hosts, and for locating IP hosts when they are queried for by name.
The DNS server role resolves domain names to IP addresses, and IP addresses to domain names. In this way, DNS provides the name resolution services that clients need to establish connections. A Fully Qualified Domain Name (FQDN) is the DNS name used to identify a computer on the network.
A DNS server is a computer running the DNS service or BIND that provides domain name services. The DNS server manages the DNS database that is located on it. The information in the DNS database of a DNS server pertains to a portion of the DNS domain tree structure, or namespace, and is used to provide responses to client requests for name resolution. A DNS server is authoritative for the contiguous portion of the DNS namespace that it hosts.
When a DNS server is queried for name resolution services it can do either of the following:
• Respond to the request directly by providing the requested information.
• Provide a pointer (referral) to another DNS server that can assist in resolving the query.
• Respond that the information is unavailable.
• Respond that the information does not exist.
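These outcomes can be mirrored with a toy resolver over an in-memory zone table; the names and addresses below are invented for illustration only.

```python
ZONE = {"www.example.com": "203.0.113.10"}        # data the server is authoritative for
REFERRALS = {"example.net": "ns1.example.net"}    # pointer to another DNS server

def resolve(name: str):
    if name in ZONE:
        return ("answer", ZONE[name])             # respond directly
    parent = name.split(".", 1)[-1]
    if parent in REFERRALS:
        return ("referral", REFERRALS[parent])    # refer to another server
    return ("nxdomain", None)                     # the name does not exist
```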
You can configure different server roles for your DNS servers. The server role that you configure for a DNS server affects the following operations of the server:
• The way in which the DNS server stores DNS data.
• The way in which the DNS server maintains data.
• Whether the DNS data in the database file can be directly edited.
The different DNS server roles which you can configure are listed here:
• Standard Primary DNS server: This DNS server owns the zones defined in its DNS database, and can make changes to its zones. A standard primary DNS server obtains zone data from the local DNS database. The primary DNS server is authoritative for the zone data that it contains. When a change needs to be made to the resource records of a zone, it has to be made on the primary DNS server so that it can be included in the local zone database. A DNS primary server is created when a new primary zone is added.
• Standard Secondary DNS server: This DNS server obtains a read-only copy of zones through DNS zone transfers. A secondary DNS server cannot make any changes to the information contained in its read-only copy. A secondary DNS server can however resolve queries for name resolution. Secondary DNS servers are usually implemented to provide fault tolerance, provide fast access for clients in remote locations, and to distribute the DNS server processing load evenly. If a secondary DNS server is implemented, that DNS server can continue to handle queries when the primary DNS becomes unavailable. Secondary DNS servers also assist in reducing the processing load of the primary DNS server. It is recommended to install at least one primary DNS server, and one secondary DNS server for each DNS zone.
• Caching-only DNS server: A caching-only DNS server only performs queries and then stores the results of these queries. All information stored on the caching-only DNS server is therefore only that data which has been cached while the server performed queries. Caching-only DNS servers only cache information when the queries have been resolved. The information stored by caching-only DNS servers is the name resolution data that it has collected through name resolution queries. Caching-only DNS servers do not host zones and are not authoritative for any DNS domain.
• Master DNS servers: The DNS servers from which secondary DNS servers obtain zone information are called master DNS servers. When a secondary DNS server is configured, you have to specify the master server from which it will obtain zone information. Zone transfer enables a secondary DNS server to obtain zone information from its configured master DNS server. A secondary DNS server can also transfer its zone data to other secondary DNS servers which are beneath it in the DNS hierarchy. Here, the secondary DNS server is regarded as the master server to the other subordinate secondary DNS servers. A secondary DNS server initiates the zone transfer process from its particular master server when it is brought online.
• Dynamic DNS Servers: Windows 2000, Windows XP and Windows Server 2003 computers can dynamically update the resource records of a DNS server when a client's IP addressing information is added, or renewed through Dynamic Host Configuration Protocol (DHCP). Both DHCP and Dynamic DNS (DDNS) updates make this possible. When dynamic DNS updates are enabled, a client sends a message to the DNS server when changes are made to its IP addressing data. This indicates to the DNS server that the A type resource record of the client needs to be updated.
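The caching-only behaviour in the list above can be sketched as a resolver that keeps answers only until their time-to-live (TTL) expires and forwards everything else upstream; the upstream callable stands in for a query to another DNS server.

```python
import time

class CachingResolver:
    """Toy caching-only resolver: hosts no zones, remembers answers per TTL."""

    def __init__(self, upstream):
        self.upstream = upstream     # callable: name -> (address, ttl_seconds)
        self.cache = {}              # name -> (address, expiry timestamp)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        if name in self.cache:
            address, expiry = self.cache[name]
            if now < expiry:
                return address       # answered from cache
            del self.cache[name]     # TTL expired, drop the entry
        address, ttl = self.upstream(name)
        self.cache[name] = (address, now + ttl)
        return address
```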
Understanding the WINS Server Role
The Windows Internet Name Service (WINS) server role provides name resolution services for clients that need to resolve NetBIOS names to IP addresses, and vice versa. A WINS server is an enhanced NetBIOS name server (NBNS) designed by Microsoft to resolve NetBIOS computer names to IP addresses. WINS can resolve NetBIOS names for local hosts and remote hosts. WINS registers NetBIOS computer names and stores these client name registrations in the WINS database. The registrations are used when clients query for host name resolution and service information, and to resolve a NetBIOS name to an IP address. Clients that are configured to use a WINS server as a NetBIOS name server are called WINS enabled clients. If the WINS server resolves the NetBIOS name to an IP address, no broadcast traffic is sent over the network; broadcasts are used only if the WINS server is unable to resolve the NetBIOS name. A WINS enabled client can communicate with a WINS server that is located anywhere on the internetwork.
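Behind the scenes, the NetBIOS names that WINS registers and resolves travel in name service packets using the first-level encoding of RFC 1001/1002: the name is padded to 15 characters, a one-byte suffix is appended, and each byte is expanded into two letters. A sketch, using a made-up computer name:

```python
def encode_netbios_name(name: str, suffix: int = 0x00) -> str:
    """First-level encode a NetBIOS name into its 32-character wire form."""
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    out = []
    for b in raw:
        out.append(chr((b >> 4) + ord("A")))    # high nibble -> letter
        out.append(chr((b & 0x0F) + ord("A")))  # low nibble -> letter
    return "".join(out)

encoded = encode_netbios_name("FILESRV")   # hypothetical computer name
```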
Although Windows 2000 was the first Windows operating system in which NetBIOS naming was no longer required, you might still need to provide support for NetBIOS naming if you have legacy applications. Remember that all Windows operating systems prior to Windows 2000 require NetBIOS name support.
To implement WINS, you only need one WINS server for an internetwork. However, implementing two WINS servers provides fault tolerance for name resolution. The secondary WINS server would be used for name resolution if the primary WINS server is unavailable to service WINS clients' requests.
A WINS server can cope with 1,500 name registrations and roughly 4,500 name queries per minute. It is recommended to have one WINS server and a backup server for each 10,000 WINS clients. When you configure the WINS server role, the WINS server must be statically assigned with the following TCP/IP parameters: static IP address, subnet mask and default gateway.
Understanding the DHCP Server Role
DHCP is a service and protocol which runs on a Windows Server 2003 operating system. DHCP functions at the application layer of the TCP/IP protocol stack. One of the primary tasks of the protocol is to automatically assign IP addresses to DHCP clients.
A server running the DHCP service is called a DHCP server. The DHCP protocol automates the configuration of TCP/IP clients because IP addressing is handled by the service. You can configure a server as a DHCP server so that it automatically assigns IP addresses to DHCP clients, with no manual intervention. IP addresses that are assigned through a DHCP server are regarded as dynamically assigned IP addresses.
The DHCP server assigns IP addresses from a predetermined IP address range(s), called a scope. A DHCP scope can be defined as a set of IP addresses which the DHCP server can allocate or assign to DHCP clients. A scope contains specific configuration information for clients that have IP addresses which are within the particular scope. Scope information for each DHCP server is specific to that particular DHCP server only, and is not shared between DHCP servers. Scopes for DHCP servers are configured by administrators.
The functions of the DHCP server are outlined below:
• Dynamically assign IP addresses to DHCP clients.
• Allocate the following TCP/IP configuration information to DHCP clients:
o Subnet mask information
o Default gateway IP addresses
o Domain Name System (DNS) IP addresses
o Windows Internet Naming Service (WINS) IP addresses
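A DHCP scope and the options handed out with each lease can be modelled as follows; the address range and option values are invented example data from the 192.168.1.0/24 network.

```python
import ipaddress

class Scope:
    """Toy DHCP scope: a pool of leasable addresses plus client options."""

    def __init__(self, start, end, options):
        first = int(ipaddress.IPv4Address(start))
        last = int(ipaddress.IPv4Address(end))
        self.free = [str(ipaddress.IPv4Address(i)) for i in range(first, last + 1)]
        self.options = options       # subnet mask, gateway, DNS, WINS
        self.leases = {}             # client id -> leased address

    def lease(self, client_id):
        if client_id not in self.leases:          # renewals keep their address
            self.leases[client_id] = self.free.pop(0)
        return self.leases[client_id], self.options

scope = Scope("192.168.1.10", "192.168.1.20",
              {"subnet_mask": "255.255.255.0", "router": "192.168.1.1",
               "dns": "192.168.1.2", "wins": "192.168.1.3"})
address, options = scope.lease("client-a")
```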
You can increase the availability of DHCP servers by using the 80/20 Rule if you have two DHCP servers located on different subnets. The 80/20 Rule is applied as follows:
• Allocate 80 percent of the IP addresses to the DHCP server which resides on the local subnet.
• Allocate 20 percent of the IP addresses to the DHCP Server on the remote subnet.
If the DHCP server that holds 80 percent of the IP addresses fails, the remote DHCP server can continue assigning IP addresses to the DHCP clients.
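The split itself is simple arithmetic; a sketch over an invented pool of 100 addresses:

```python
def split_80_20(addresses):
    """Give 80 percent of a scope to the local server, 20 to the remote one."""
    cut = len(addresses) * 80 // 100
    return addresses[:cut], addresses[cut:]

pool = [f"10.0.0.{i}" for i in range(1, 101)]   # 100 example addresses
local_pool, remote_pool = split_80_20(pool)
```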
With Windows Server 2003 DHCP, three options are available for registering IP addresses in DNS. The options can be configured for the DHCP server, or for each individual scope. The options which can be specified to enable or disable the DHCP service's dynamic updating of DNS records on behalf of the client are:
• The DHCP server can be configured to register no client IP addresses when it assigns IP addresses to DHCP clients.
• The DHCP server can be configured to always register the IP addresses of all clients when they receive IP addresses from the DHCP server.
• The default option results in the DHCP server registering the IP addresses of clients with the authoritative DNS server, based on the client's request for an IP address.
Understanding the Streaming Media Server Role
The streaming media role provides media services so that clients can access streaming audio and video. Windows Media Services is used to provide media services to clients, and can be configured on server platforms and on enterprise platforms.
Windows Media Services is not available in the following editions of Windows Server 2003:
• Windows Server 2003 Web Edition
• Windows Server 2003 64-bit versions.
Understanding Certificate Authorities (CAs) Servers
A Certificate Authority is an entity that generates and validates digital certificates. The CA adds its own signature to the public key of the client. By using the tools provided by Microsoft, you can create an internal CA structure within your organization.
A digital certificate associates a public key with an owner, and verifies the identity of the owner. A certificate cannot be forged because the authority that issued the certificate digitally signs it. Certificates are issued for functions such as the encryption of data, code signing, Web user and Web server authentication, and securing e-mail. Certificates in Windows XP and Windows Server 2003 are managed by the Data Protection API. When certificates are issued to a client, they are stored in the Registry and in Active Directory. You can also store certificates on smart cards. The information included in a certificate is determined by the type of certificate being used.
Certificate Authorities (CAs) are servers which are configured to issue certificates to users, computers, and services. CAs also manage certificates. An organization can have multiple CAs, which are arranged in a logical manner. A CA can be a trusted third party entity such as VeriSign or Thawte, or it can be an internal entity of the organization. An example of an internal CA entity is Windows Server 2003 Certificate Services. Windows Server 2003 Certificate Services can be used to create certificates for users and computers in Active Directory domains.
The functions performed by Certificate Authorities (CAs) are listed below:
• Accepts the request for a certificate from a user, computer, application, or service.
• Authenticates the identity of the user, computer, or service requesting the certificate. The CA uses its policies, together with the type of certificate being requested, to verify the identity of the requestor.
• Creates the certificate for the requestor.
• Digitally signs the certificate using its own private key.
Windows Certificate Services is used to create a Certificate Authority on Windows Server 2003 servers. The first CA that is installed becomes the root CA. The common practice is to first install the root CA, and then use the root CA to validate all the other CAs within the organization. A root CA is the most trusted CA in a CA hierarchy. When a root CA issues certificates to other CAs, those CAs become subordinate CAs of the root CA. When a root CA is online, it is used to issue certificates to subordinate CAs. The root CA does not usually issue certificates directly to users, computers, applications, or services.
A subordinate CA can also issue certificates to other subordinate CAs. These subordinate CAs are called intermediate CAs. While an intermediate CA is subordinate to the root CA, it is considered superior to those subordinate CAs to which it issued certificates. Subordinate CAs which only issue certificates to users, and not to other subordinate CAs, are called leaf CAs.
The types of CA which you can install are listed here:
• Enterprise root CA: This is the topmost CA in the CA hierarchy, and is the first CA installed in the enterprise. Enterprise root CAs are reliant on Active Directory. Enterprise root CAs issue certificates to subordinate CAs.
• Enterprise Subordinate CA: This CA also needs Active Directory, and is used to issue certificates to users and computers.
• Stand-alone Root CA: A stand-alone root CA is the topmost CA in the certificate chain. A stand-alone root CA is not, however, dependent on Active Directory, and can be removed from the network. This makes a stand-alone root CA the solution for implementing a secure offline root CA.
• Stand-alone Subordinate CA: This type of CA is also not dependent on Active Directory, and is used to issue certificates to users, computers, and other CAs.
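The hierarchy these CA types form can be modelled as a chain walk from a certificate up to its self-signed root; the CA and certificate names below are purely illustrative.

```python
# Each certificate records its issuer; a root CA signs itself.
ISSUED_BY = {
    "user-cert": "IntermediateCA",
    "IntermediateCA": "EnterpriseRootCA",
    "EnterpriseRootCA": "EnterpriseRootCA",   # self-signed root
}

def chain_to_root(cert: str) -> list:
    """Walk issuer links until the self-signed root CA is reached."""
    chain = [cert]
    while ISSUED_BY[chain[-1]] != chain[-1]:
        chain.append(ISSUED_BY[chain[-1]])
    return chain
```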
Understanding the Configure Your Server Wizard
The Configure Your Server Wizard is one of the main wizards used to perform administrative tasks on Windows Server 2003 computers, and is used to configure server roles. Windows Server 2003 provides a new tool for defining and managing server roles, namely the Manage Your Server utility. The actual wizard for applying server roles to computers is the Configure Your Server Wizard, which is included within, and launched from, the Manage Your Server utility.
To access the Manage Your Server utility and use the Configure Your Server Wizard,
1. Click Start, click Administrative Tools, and then click Manage Your Server.
The main screen of the Manage Your Server utility is made up as follows:
• At the top of the Manage Your Server main screen, are three buttons, labelled as follows:
o Add or remove a role button; for initiating the Configure Your Server Wizard.
o Read about server roles button; for accessing information on server roles.
o Read about remote administration button; for accessing information on remote administration.
• The left end of the screen contains the server roles which are already configured for the particular server.
• Each listed configured server role is accompanied by buttons which can be used to view information on the existing role, or manage the existing server role. The buttons which are displayed differ between the existing server roles.
You can also initiate the Configure Your Server Wizard by:
1. Clicking Start, Administrative Tools, and then clicking Configure Your Server.
After the Configure Your Server Wizard is initiated, the following preliminary steps need to be performed before any server roles can be added:
• Install all modems and network cards.
• Attach all necessary cables.
• Create an Internet connection if the server is to be used for Internet connectivity.
• Turn on all peripherals.
• Have the Windows Server 2003 installation CD at hand.
Clicking the Next button on the Preliminary Steps screen invokes the Configure Your Server Wizard to test network connections and verify the operating system, and then displays the Server Role screen.
The Server Role screen contains the following columns:
• Server role column; indicates the server roles which can be added or removed.
• Configured column; indicates whether a server role is configured or not configured.
If you want to navigate to the Add or Remove Programs in Control Panel, click the Add or Remove Programs link on the Server Role screen.
How to add an application server role to Windows Server 2003
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. Click the Add or remove a role button.
3. The Configure Your Server Wizard initiates.
4. Click Next on the Preliminary Steps page of the wizard.
5. When the Server Role page opens, select the Application server (IIS, ASP.NET) server role, and then click Next.
6. The Application Server Options page opens.
7. Select the FrontPage Server Extensions checkbox to include Web server extensions in the configuration.
8. Select the Enable ASP.NET checkbox so that Web applications created through ASP.NET can be utilized. Click Next.
9. Verify the settings which you have selected on the Summary of Selections. Click Next.
10. The installation of the components occurs next.
11. Click Finish.
How to install the Remote Access and VPN server role using the Configure Your Server Wizard
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. Select the Add or remove a role option.
3. The Configure Your Server Wizard starts.
4. On the Preliminary Steps page, click Next.
5. A message appears, informing you that the Configure Your Server Wizard is detecting network settings and server information.
6. When the Server Role page appears, select the Remote Access/VPN Server option and then click Next.
7. On the Summary of Selections page, click Next.
8. The Welcome to the Routing and Remote Access Server Setup Wizard page is displayed.
How to add the global catalog server role on a domain controller
1. Click Start, Administrative Tools, and then click Active Directory Sites and Services.
2. In the console tree, expand Sites, and then expand the site that contains the domain controller which you want to configure as a global catalog server.
3. Expand the Servers folder, and locate and then click the domain controller that you want to designate as a global catalog server.
4. In the details pane, right-click NTDS Settings, and then click Properties on the shortcut menu.
5. The NTDS Settings Properties dialog box opens.
6. The General tab is where you specify the domain controller as a global catalog server.
7. Enable the Global Catalog checkbox.
8. Click OK.
How to remove the global catalog server role from a domain controller
1. Open the Active Directory Sites and Services console.
2. In the console tree, locate and click the domain controller currently configured as the global catalog server.
3. Right-click NTDS Settings and click Properties on the shortcut menu to open the NTDS Settings Properties dialog box.
4. Clear the Global Catalog checkbox.
5. Click OK.
How to install the DHCP server role
1. Click Start, Control Panel, and then click Add Or Remove Programs.
2. When the Add Or Remove Programs dialog box opens, click Add/Remove Windows Components.
3. This starts the Windows Components Wizard.
4. In the Components list box, select Networking Services, and then click the Details button.
5. The Networking Services dialog box opens.
6. In the Subcomponents Of Networking Services list box, check the Dynamic Host Configuration Protocol (DHCP) checkbox.
7. Click OK. Click Next.
8. When The Completing The Windows Components Wizard page is displayed, click Finish.
How to implement a caching-only DNS server
1. Open Control Panel.
2. Double-click Add/Remove Programs, and then click Add/Remove Windows Components.
3. The Windows Components Wizard starts.
4. Click Networking Services, and then click Details.
5. In the Networking Services dialog box, select the checkbox for Domain Name System (DNS) in the list. Click OK. Click Next.
6. When The Completing The Windows Components Wizard page is displayed, click Finish.
7. Do not add or configure any zones for the DNS server. The DNS Server service functions as a caching-only DNS server by default, so no additional configuration is necessary to set one up.
8. You should verify that the server root hints are configured correctly.
How to add the Terminal Services server role to Windows Server 2003 using Add Or Remove Programs in Control Panel
1. Click Start, Control Panel, and then click Add Or Remove Programs.
2. Click Add/Remove Windows Components to initiate the Windows Components Wizard.
3. Select the Terminal Server checkbox. Click Next.
4. When the Terminal Server Setup page is displayed, read the message on Terminal Server Licensing and Terminal Server mode. Click Next.
5. Select the appropriate security setting. Click Next.
6. After the necessary files are copied, click Finish.
7. When the System Settings Change page is displayed, click Yes to reboot the computer.
8. Terminal Services Configuration, Terminal Services Manager, and Terminal Server Licensing are added to the Administrative Tools menu.
How to install IIS 6.0 using the Configure Your Server Wizard
1. Click Start, click Administrative Tools, and then click Manage Your Server.
2. In the Manage Your Server main screen, click Add or remove a role.
3. The Configure Your Server Wizard starts.
4. The Preliminary Steps screen is a warning screen that prompts you to verify that the requirements for the installation have been met. Click Next.
5. The network connections configured on the machine are tested and verified before the wizard displays the next screen.
6. On the Configuration Options screen, choose one of the following options:
o Typical configuration for a first server: You would choose this option to install the server as a domain controller, and to install the Active Directory directory service, DNS service, and DHCP service.
o Custom configuration: Select this option to install IIS 6.0 on the server.
Click Next.
7. On the Server Role screen, choose Application Server (IIS, ASP.NET) as the role that you want to install on the server. From this screen, you can also select to install Terminal, Print, DNS, and DHCP services. Selecting the Application Server (IIS, ASP.NET) option installs IIS, ASP.NET, and additional components so that the server can host websites and FTP sites. Click Next.
8. On the Application Server Options screen, you can select that these optional components be installed:
o FrontPage Server Extensions, for users to develop Web content and publish Web content on the IIS machine via Microsoft FrontPage or Microsoft Visual Studio.
o Microsoft Data Engine, for hosting SQL databases on the IIS machine
o Enable ASP.NET: This option is enabled by default. ASP.NET is the framework used to run Web applications on IIS.
Click Next.
9. The Summary of Selections screen displays a summary of the components which you selected for installation. Verify that the correct items are listed on this screen. The Enable COM+ for remote transactions option is automatically added. Click Next.
10. The installation process now commences. You will either have to insert the Windows Server 2003 CD or indicate the location of the installation files. The Application Selections screen is displayed, the Configuration Components window appears, and the necessary files are copied.
"Fizz-mo" servers
Consider the following scenario: For two years your organization has been operating a Windows 2000 Active Directory with eight domain controllers. Your budget request for replacement of the two oldest servers has been approved, and you have installed the new servers. Once they are up and running, you shut down and turn off the old servers and remove them from the rack. Now, a week later, you attempt to create a new domain in your forest, but Active Directory will not allow you to do it, even though you are a member of the Enterprise Administrators group. Still later, you try to install Exchange 2000, but this fails, too, because you cannot modify the schema, even though you are also a member of the Schema Admins group. What has gone wrong?
First, there are a few things you need to understand. Windows NT 4.0 networks use a single-master model, in which you have a Primary Domain Controller (PDC) and a number of Backup Domain Controllers (BDCs). With the advent of Active Directory, introduced with Windows 2000 Server, Microsoft moved to a multi-master model, in which you have a number of Domain Controllers, all of which are more or less equal, replicating information between each other. However, it turns out that not quite all the servers are equal. A few of them carry out unique and important roles within Active Directory. I'm going to take a look at each of these roles to see which functions they perform. This will help you see why you might have run into some of the problems mentioned above.
"Fizz-mo" servers
In addition to multi-master operations servers, Active Directory in both Windows 2000 and 2003 has what are called Flexible Single-Master Operations servers, or FSMO (pronounced "fizz-mo") for short. A FSMO server may have one or more of five possible roles within Active Directory. The reason for having these special servers is to help prevent conflicts within Active Directory. If only one server can control access to the schema, for instance, there will be no conflicts in the schema. The five roles found in FSMO servers are:
• Schema master: 1 per forest
• Domain naming master: 1 per forest
• Relative identifier master (RID): 1 per domain
• PDC emulator: 1 per domain
• Infrastructure master: 1 per domain
Two of these roles, schema master and domain naming master, are unique to each forest. In other words, there is only one schema master and one domain naming master in each forest. The other three are unique to each domain. So, for instance, there will be one infrastructure master in each domain within a forest. In a small network, with only one domain, it is possible that all five of these roles are found on the same domain controller. Or they could be split up, with per-forest roles on one server, and per-domain roles on one or more other domain controllers. These roles are placed by default on the first server that becomes a domain controller in the forest. However, an administrator may, and in some cases should, move the roles to another server. I will now discuss each of these roles in turn.
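The per-forest and per-domain scoping above can be sketched as a small calculation. This is purely illustrative arithmetic, not a Windows API; the role names are taken from the list above:

```python
# Illustrative sketch (not a Windows API): totalling FSMO role holders
# in a forest, following the per-forest and per-domain scoping above.

FOREST_ROLES = ["schema master", "domain naming master"]                 # one each per forest
DOMAIN_ROLES = ["RID master", "PDC emulator", "infrastructure master"]   # one each per domain

def fsmo_role_instances(num_domains):
    """Total FSMO role instances in a forest containing num_domains domains."""
    return len(FOREST_ROLES) + len(DOMAIN_ROLES) * num_domains

print(fsmo_role_instances(1))  # 5  -- a single-domain forest may hold all five on one DC
print(fsmo_role_instances(3))  # 11 -- 2 forest-wide roles plus 3 roles in each of 3 domains
```

Note that the count is of role instances, not servers: a single DC can hold several of these roles at once, or they can be spread across several DCs.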
Schema operations master. 1 per forest
The schema is simply the structure of the AD database itself. If a change needs to be made to the schema after AD is installed, it is the schema master that controls those changes. You may never need to change the schema, in which case it won't matter whether the schema master is operational or not.
On the other hand, there are a few "AD-aware" applications on the market, such as Exchange 2000, which modify the AD schema as part of the installation process. It would seem likely that the number of these AD-aware applications would grow in the future. If the schema operations master is not available, you would not be able to install these applications.
There are a few things to remember about the schema operations master:
• There is only one schema operations master in the forest.
• By default, the first server in the forest has the schema operations master role.
• In order to change the schema or move the schema operations master role to another server, you must be a member of the Schema Admins group.
Domain naming operations master. 1 per forest
Although it may seem implausible, it is theoretically possible that two enterprise managers might try to create domains with the same name at the same time. To prevent such a conflict, the "domain naming operations master" governs the naming of domains in AD.
Here's what you need to remember about the domain naming operations master:
• There is only one domain naming operations master in the forest.
• By default, the first server in the forest has the domain naming operations master role.
• In order to create a domain or move the domain naming operations master role to another server, you must be a member of the Enterprise Administrators group.
• The domain naming operations master role must be placed on a domain controller that is also a Global Catalog server (remember that a Global Catalog server contains part of the schema, including domain names).
Relative ID operations master (RID). 1 per domain
A security identifier, or SID, uniquely identifies everything in a Windows NT/2000/2003 network. That SID is composed of two parts: three 32-bit numbers that are always the same within a given domain, and one 32-bit number that uniquely identifies a particular object. That last 32-bit number is called a "relative identifier," or RID.
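The two-part structure described above can be seen by splitting a textual SID at its last component. This is a hedged, string-level illustration only, not the binary Win32 SID structure, and the sample SID value is made up:

```python
# Hedged sketch: splitting a textual SID such as "S-1-5-21-a-b-c-RID"
# into the domain identifier and the relative identifier (RID), per the
# two-part description above. This works on the string form only; it is
# not the binary Win32 SID structure, and the sample SID is made up.

def split_sid(sid):
    """Return (domain_identifier, rid) for a domain-account SID string."""
    parts = sid.split("-")
    if parts[0] != "S":
        raise ValueError("not a SID string: %r" % sid)
    domain_identifier = "-".join(parts[:-1])  # constant for every account in the domain
    rid = int(parts[-1])                      # the final value uniquely identifies the object
    return domain_identifier, rid

domain_id, rid = split_sid("S-1-5-21-1004336348-1177238915-682003330-500")
print(domain_id)  # S-1-5-21-1004336348-1177238915-682003330
print(rid)        # 500 (the well-known RID of the built-in Administrator account)
```

Two accounts in the same domain share the same domain identifier and differ only in that final RID, which is exactly why duplicate RIDs within a domain must be prevented.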
One DC in each domain holds the RID operations master role for that domain. Its function is to distribute pools of relative identifiers to all the DCs in the domain, to use when creating users, groups, computers, printers, etc. In that way, it ensures the uniqueness of every RID in that domain.
There are some different things that you should remember about the RID operations master:
• Unlike the last two operations master roles, there is one RID operations master in every domain in the forest (e.g., if you have three domains, then there are three RID operations masters in the forest).
• By default, the first server in a domain is the RID operations master.
• In order to move the RID operations master role to another server, you must be a member of the Domain Administrators group.
PDC emulator operations master. 1 per domain
There are times when workstations running Windows NT or Windows 9x will require access to a domain's primary domain controller (PDC). If these workstations are part of a Windows 2000 or 2003 network, there could be a problem, since there is no PDC. For this reason, another domain-level FSMO role is the PDC emulator. As the name implies, the DC containing this role emulates a PDC for those workstations running an OS earlier than Windows 2000.
But what if all your workstations are running either Windows 2000 Pro or Windows XP Pro? Do you still need a PDC emulator? The answer is yes.
Changes made to AD are automatically replicated to all domain controllers. But in a large network, this can take time. Often, that is okay, but there are two particular instances when you don't want to have to wait very long for replication: unlocking an account and changing a password. The reason, of course, is that the user cannot work until the change has been replicated and is in effect. Therefore, replication for these two events is forced immediately to the PDC emulator. If the local DC for that user determines that the account is locked or the password is incorrect, it will check the PDC emulator before denying logon. In this way, the user can get right to work.
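The "check the PDC emulator before denying logon" flow above can be modeled in a few lines. All class and function names here are hypothetical stand-ins; this sketches the decision logic, not the real Netlogon protocol:

```python
# Illustrative model of the logon flow described above: the local DC
# consults the PDC emulator before denying a logon, so a password change
# takes effect before normal replication has reached every DC. The names
# here are hypothetical; this is the logic, not the Netlogon protocol.

class FakeDC:
    """A stand-in domain controller with a local password table."""
    def __init__(self, passwords):
        self.passwords = passwords

    def check_password(self, user, password):
        return self.passwords.get(user) == password

def validate_logon(user, password, local_dc, pdc_emulator):
    if local_dc.check_password(user, password):
        return True
    # The local DC may simply not have replicated a recent change yet,
    # so retry against the PDC emulator before denying the logon.
    return pdc_emulator.check_password(user, password)

stale_dc = FakeDC({"alice": "old-secret"})   # has not replicated the change yet
pdc = FakeDC({"alice": "new-secret"})        # received the urgent replication

print(validate_logon("alice", "new-secret", stale_dc, pdc))  # True
print(validate_logon("alice", "wrong-guess", stale_dc, pdc))  # False
```

The second check is what lets a user log on with a just-changed password even when their nearest DC is behind on replication.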
Like the RID operations master, there is one PDC emulator per domain. By default, it is the first server in the domain, and you must be a Domain Administrator in order to move the role to another DC.
Infrastructure operations master. 1 per domain
The fifth and final FSMO role in Active Directory is the infrastructure operations master. This role is responsible for updating references to objects in other domains (for example, group memberships that include users from another domain) and replicating those updates within its own domain. If the infrastructure operations master is not available, replication within the domain still takes place, but those cross-domain reference updates are delayed.
Like the RID and PDC emulator roles, there is one infrastructure operations master in every domain, and, by default, it is placed on the first DC in the domain.
However, there is something else that you must be aware of in placing the infrastructure operations master. It should not be placed on a DC that is also a Global Catalog server. The reason for this is very simple. The function of the infrastructure master is to query other domain controllers, update references found that are not in its own domain controller, and then replicate those updates to other domain controllers. Remember that the Global Catalog holds a partial replica of every object in the forest. If the infrastructure master is located on a Global Catalog server, it will never find references to objects that are not found on its own DC. Thus it will never replicate changes or updates.
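The comparison the paragraph above describes can be sketched as follows. Data shapes and names are illustrative only; the real infrastructure master works on directory objects, not dictionaries:

```python
# Hedged sketch of the behavior just described: the infrastructure master
# compares its domain's cross-domain references against an authoritative
# source and collects the ones whose targets have changed, so they can be
# replicated within its own domain. Names and data shapes are made up.

def find_stale_references(local_refs, authoritative_refs):
    """Return cross-domain references whose target differs from the authority."""
    return {name: authoritative_refs[name]
            for name, target in local_refs.items()
            if name in authoritative_refs and authoritative_refs[name] != target}

local = {"CN=Bob": "CN=Bob,OU=Sales,DC=other"}          # stale reference held locally
authority = {"CN=Bob": "CN=Bob,OU=Marketing,DC=other"}  # the object was moved

print(find_stale_references(local, authority))
# -> {'CN=Bob': 'CN=Bob,OU=Marketing,DC=other'}
```

On a Global Catalog server, the "local" copy already matches the authoritative data, so this comparison would find nothing, and stale references held by other DCs in the domain would never be corrected — which is exactly why the two roles should not share a server.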
Taking the next step
Flexible single-master operations roles in Active Directory help prevent conflicts, but can cause problems on your network if their function is interrupted for any length of time. That's why it's important to not only know exactly where those servers are in the network, but also to plan for their placement ahead of time. Moreover, you will need to know what to do if any of those functions are interrupted.
In part two of this article, I will discuss the placement of FSMO servers, how to transfer FSMO roles to another server if the FSMO server is functional, and how to move the role to another server if the original FSMO is no longer available.