Question 1
1. Describe server upgrade and migration considerations
Whether you are upgrading or migrating to a new version of Windows, you must be aware of the following issues and considerations:
Application Compatibility
For more information about application compatibility in Windows, see the Application Compatibility Toolkit (ACT).
Multilingual Windows Image Upgrades
When performing multilingual Windows upgrades, cross-language upgrades are not supported by USMT. If you are upgrading or migrating an operating system with multiple language packs installed, you can upgrade or migrate only to the system default user interface (UI) language. For example, if English is the default but you have a Spanish language pack installed, you can upgrade or migrate only to English.
If you are using a single-language Windows image that matches the system default UI language of your multilingual operating system, the migration will work. However, all of the language packs will be removed, and you will have to reinstall them after the upgrade is completed.
Errorhandler.cmd
When upgrading from an earlier version of Windows, if you intend to use Errorhandler.cmd, you must copy this file into the %WINDIR%\Setup\Scripts directory on the old installation. This ensures that if there are errors during the down-level phase of Windows Setup, the commands in Errorhandler.cmd will run.
Data Drive ACL Migration
During the specialize configuration pass of Windows Setup, the root access control list (ACL) on drives formatted for NTFS that do not appear to have an operating system will be changed to the default Windows XP ACL format. The ACLs on these drives are changed to enable authenticated users to modify access on folders and files.
Changing the ACLs may affect the performance of Windows Setup if the default Windows XP ACLs are applied to a partition with a large amount of data. Because of these performance concerns, you can change the following registry value to disable this feature:
Key: HKLM\System\Setup
Type: REG_DWORD
Value: "DDACLSys_Disabled" = 1
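If that performance impact matters in your environment, the registry change above can be expressed as a .reg file. The sketch below is assembled from the key and value named in the text; verify it against your Windows version before applying it:

```reg
Windows Registry Editor Version 5.00

; Disables data-drive ACL migration during the specialize pass of Windows Setup
; (key and value taken from the documentation above)
[HKEY_LOCAL_MACHINE\System\Setup]
"DDACLSys_Disabled"=dword:00000001
```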
Question 2
2. Explain how to create a server upgrade and migration plan
Step 1: Find Target Servers
Understand that all of your IT infrastructure may not need migration. The first step is to distinguish which servers still run Windows Server 2003 and which are already upgraded. For small IT operations, assessing which servers need to be migrated can be relatively easy, but medium-sized locations may need some help. Microsoft offers an assessment and planning toolkit that can help you get started.
Step 2: Compose Affected Applications List
Once you know the servers you are targeting, assess the installed applications and workloads. For instance, do you have programs that currently run on a 32-bit Windows Server 2003 installation? Some of those programs may have issues when moving to a 64-bit environment such as Server 2012. Identify items that will be affected and list them. Now is the time to think about updating legacy applications and solutions.
In many cases this can be fairly straightforward. For an old Exchange server, for example, you may simply decide to move many of its functions to a cloud solution such as Office 365.
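The inventory step above can be sketched as a simple filter: given a list of installed applications, flag the ones that may break when moving from 32-bit Server 2003 to a 64-bit target. The application records and the flagging rules below are illustrative assumptions, not output of any real Microsoft assessment tool:

```python
# Sketch: flag applications that may need attention before a 64-bit migration.
# The inventory data and rules here are hypothetical examples.

def flag_migration_risks(applications):
    """Return the names of applications that need review before moving to 64-bit."""
    flagged = []
    for app in applications:
        # 16-bit code cannot run on 64-bit Windows at all, and 32-bit apps
        # that ship kernel-mode drivers commonly fail; both deserve review.
        if app["arch"] == "x86-16" or (app["arch"] == "x86-32" and app["has_kernel_driver"]):
            flagged.append(app["name"])
    return flagged

inventory = [
    {"name": "LegacyPayroll", "arch": "x86-16", "has_kernel_driver": False},
    {"name": "OldAntivirus", "arch": "x86-32", "has_kernel_driver": True},
    {"name": "LineOfBusinessApp", "arch": "x86-32", "has_kernel_driver": False},
]

print(flag_migration_risks(inventory))  # ['LegacyPayroll', 'OldAntivirus']
```

A real assessment would draw this inventory from a tool such as the Microsoft Assessment and Planning Toolkit mentioned in Step 1.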
Step 3: Determine Risks
In a server migration, many things can unfortunately go wrong. If a mission-critical system becomes unavailable for an extended time, what are the financial and productivity costs? During this phase of planning, get to know your backups in case of unintended consequences. You don't want to find out that one key application no longer works post-migration and have no backup in place.
One way to help determine risk is to stage a migration in a controlled setting similar to your production environment. This allows you to see what can potentially go wrong and what you need to do to address those issues.
Step 4: Make a Rollback Plan
As its name implies, a rollback plan lets you revert any changes you made so that everything returns to its original state. It is important to have a rollback plan in case something goes wrong. If downtime during the migration becomes unacceptably long, you should have a plan to revert to Server 2003 and get everything back to normal.
Microsoft offers a handy step-by-step guide for creating a rollback plan for Windows Server 2003 systems.
Analyze what went wrong and how you need to address the issue to ensure a smooth migration at a later time.
Step 5: Make an Execution Plan
Once you have found your target servers and affected applications, determined the risks, and created a rollback plan, proceed to create an execution plan. You can take the DIY approach and do everything in-house or go with a vendor-provided service. Microsoft also offers basic instructions on how to make an execution plan.
Question 3
3. Explain how to plan for server virtualization
Step 1: Determine the Virtualization Scope
Step 2: Create the List of Workloads
Step 3: Select the Backup and Fault-Tolerance Approaches for Each Workload
Step 4: Summarize and Analyze the Workload Requirements
Step 5: Design and Place Virtualization Host Hardware
Question 4
4. Explain the differences between the Windows Server 2012 editions
Edition: Datacenter. Intent: highly virtualized environments. Major feature: unlimited virtual instance rights. Licensing: per two-processor license. Clients: per CAL.
Edition: Standard. Intent: little virtualization, low density. Major feature: two virtual instances. Licensing: per two-processor license. Clients: per CAL.
Edition: Essentials. Intent: small business. Major feature: simple administration, no virtualization rights. Licensing: per server. Clients: 25 accounts.
Edition: Foundation. Intent: entry-level, economy server. Major feature: general-purpose server, no virtualization rights. Licensing: per server. Clients: 15 accounts.
You might note that there are far fewer edition-specific feature differences than there were in older versions of Windows Server. For example, in the past, if you wanted failover clustering, you needed to go with either the Enterprise or Datacenter edition of Windows Server. With Windows Server 2012, the only difference between Standard and Datacenter revolves around virtualization rights. Otherwise, both editions have exactly the same feature set, which includes:
Windows Server Failover Clustering
BranchCache Hosted Cache Server
Active Directory Federation Services
Additional Active Directory Certificate Services capabilities
Distributed File System (support for more than one DFS root)
DFS-R Cross-File Replication
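The edition and licensing differences above can be condensed into a small helper. The rules encoded here (each license covers up to two processors; Standard grants two virtual instances per license and can be stacked, Datacenter is unlimited) follow the table above, but the function itself is an illustrative sketch, not official licensing guidance:

```python
import math

# Sketch: estimate how many Windows Server 2012 licenses a virtualization
# host needs, following the edition table above. Not licensing advice.

def licenses_needed(edition, processors, vm_count):
    """Each license covers up to two processors. Standard licenses can be
    stacked, each stack step adding two virtual instances; Datacenter
    grants unlimited virtual instances."""
    per_proc_packs = math.ceil(processors / 2)
    if edition == "Datacenter":
        return per_proc_packs
    if edition == "Standard":
        # All processors must be covered for each increment of two VMs.
        return per_proc_packs * max(1, math.ceil(vm_count / 2))
    raise ValueError("Essentials and Foundation carry no virtualization rights")

print(licenses_needed("Standard", processors=2, vm_count=6))    # 3
print(licenses_needed("Datacenter", processors=4, vm_count=50)) # 2
```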
Question 5
5. Explain the main differences between an in-place upgrade and a server migration
Upgrade
An upgrade refers to the process by which an existing TFS server is moved from one version to a newer version. Upgrades are always fully supported and are tested in many configurations before being released. In an upgrade, data on the server is transformed at the database level, and all data and metadata are preserved.
There are also multiple flavors of upgrades: in-place and migration-based. An in-place upgrade is an upgrade that, when complete, uses the same set of hardware that is running the current TFS version. A migration-based upgrade is an upgrade involving a second, duplicate set of hardware that will host the new version of TFS when the process is complete. Note that despite having a similar name, a migration-based upgrade is NOT a migration.
Migration
A migration refers to the process of replaying actions from one system into another system. One of the key differences compared to an upgrade is that a migration is a lower-fidelity data transfer. In TFS, only version control and work item tracking data can be migrated between servers; build data, reports, and numerous other pieces of metadata cannot be migrated. In general, the available migration tools have significantly less testing than the upgrade process, and most have limited support (as they are released out of band).
In the case of a migration, the data transformations are done using only the public APIs, which are limited in providing only certain pieces of information while moving data. The result of these limitations is that some data is lost or distorted in the process of migration. Examples of this are artifact IDs (changeset numbers, work item IDs), date-time stamps, area paths, and iteration paths.
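The fidelity difference described above can be sketched as a lookup: an upgrade preserves all data and metadata, while a migration carries over only some artifact types. The category names below paraphrase the text and are illustrative, not an exhaustive inventory:

```python
# Sketch of which TFS data survives each transfer method, per the text above.
# Categories are illustrative; consult product documentation for a real list.

UPGRADE_PRESERVES = {"version control", "work items", "builds", "reports", "metadata"}
MIGRATION_PRESERVES = {"version control", "work items"}  # lower-fidelity transfer

def lost_in_migration():
    """Data kept by an upgrade but dropped (or distorted) by a migration."""
    return sorted(UPGRADE_PRESERVES - MIGRATION_PRESERVES)

print(lost_in_migration())  # ['builds', 'metadata', 'reports']
```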
Question 6
6. Explain the benefits of a single forest model, and provide examples
Single-Forest, Single-Domain Models
The single-forest, single-domain model, shown in the following figure for shared and dedicated hosting environments, is the recommended hosting solution for Hosted Messaging and Collaboration service providers.
Figure: Single-forest, single-domain models
Single-Forest, Multiple-Domains Model
Because the single-forest, multiple-domains model shares a single forest, the Active Directory components that map to forest boundaries are shared between all of the domains within the forest, as shown in the following figure. These are:
Global catalog
Schema
Common configuration information
Schema master and domain naming master FSMO roles
Figure: Single-forest, multiple-domains model
The multiple-domains model is appropriate when the environment:
Supports a company or reseller that requires changes to the domain-wide policies set for passwords, account lockout, and Kerberos ticket time-out settings.
Requires more control of, and a reduction in, the replication traffic generated between two geographically dispersed data centers that have minimal bandwidth between them. However, if this is the only reason, you may want to explore alternatives such as using Active Directory sites and partitioning the data centers into sites.
Active Directory sites enable you to schedule replication traffic to occur during off-peak hours. However, if you need different domain-wide policies per data center because of bandwidth constraints or domain-wide security requirements, then the multiple domain models would be required.
Question 7
7. Explain the various components in a single AD DS forest model that are shared between all domain controllers in a forest, including:
Schema directory partition
Configuration directory partition
Global catalog directory partition
Forest administrators
Trusts
Organizational Units: Organizational units are container objects. You use these container objects to arrange other objects in a manner that supports your administrative purposes. By arranging objects in organizational units, you make it easier to locate and manage them. You can also delegate the authority to manage an organizational unit. Organizational units can be nested in other organizational units.
You can arrange objects that have similar administrative and security requirements into organizational units. Organizational units provide multiple levels of administrative authority, so that you can apply Group Policy settings and delegate administrative control. This delegation simplifies the task of managing these objects and enables you to structure Active Directory to fit your organization's requirements.
Domains: Domains are container objects, a collection of administratively defined objects that share a common directory database, security policies, and trust relationships with other domains. In this way, each domain is an administrative boundary for objects. A single domain can span multiple physical locations or sites and can contain millions of objects.
Domain Trees: Domain trees are collections of domains that are grouped together in hierarchical structures. When you add a domain to a tree, it becomes a child of the tree root domain. The domain to which a child domain is attached is called the parent domain.
A child domain might in turn have its own child domain. The name of a child domain is combined with the name of its parent domain to form its own unique Domain Name System (DNS) name, such as Corp.nwtraders.msft. In this manner, a tree has a contiguous namespace.
Forests: A forest is a complete instance of Active Directory. Each forest acts as a top-level container in that it houses all domain containers for that particular Active Directory instance. A forest can contain one or more domain container objects, all of which share a common logical structure, global catalog, directory schema, and directory configuration, as well as automatic two-way transitive trust relationships. The first domain in the forest is called the forest root domain. The name of that domain refers to the forest, such as Nwtraders.msft. By default, information in Active Directory is shared only within the forest. In this way, the forest is a security boundary for the information that is contained in that instance of Active Directory.
Site Objects: Sites are leaf and container objects. The sites container is the topmost object in the hierarchy of objects that are used to manage and implement Active Directory replication. The sites container stores the hierarchy of objects that are used by the Knowledge Consistency Checker (KCC) to effect the replication topology. Some of the objects located in the sites container include NTDS Site Settings objects, subnet objects, connection objects, server objects, and site objects (one site object for each site in the forest). The hierarchy is displayed as the contents of the Sites container, which is a child of the Configuration container.
Schema directory partition
The schema is stored in its own partition (the schema directory partition). The schema directory partition is replicated among all the domain controllers in the forest, and any change that is made to the schema is replicated to every domain controller in the forest. Because the schema dictates how information is stored, and because any changes that are made to the schema affect every domain controller, changes to the schema should be made only when necessary through a tightly controlled process after testing has been performed to ensure that there will be no adverse effects on the rest of the forest.
Aspects
The following is a list of all aspects that are part of this managed entity:
Name Description
Schema Extension Validation
When a schema change is made, Active Directory Domain Services (AD DS) validates the schema change and rejects the request if any error is found.
Schema Operations
Schema operations include the following:
Updating the schema cache
Updating the schema index
Implementing schema modifications
Maintaining schema integrity
Configuration directory partition
The configuration directory partition is created initially when the first Windows 2000 domain is created during the installation of Active Directory; thereafter, it is replicated to every domain controller in the forest. When a child domain or a new tree-root domain is created in the forest, or when an additional domain controller is added to an existing domain, the configuration directory partition is copied to the new domain controller.
Global catalog directory partition
The global catalog is a distributed data repository that contains a searchable, partial representation of every object in every domain in a multidomain Active Directory Domain Services (AD DS) forest. The global catalog is stored on domain controllers that have been designated as global catalog servers and is distributed through multimaster replication. Searches that are directed to the global catalog are faster because they do not involve referrals to different domain controllers.
Forest administrators
Schema Administrator
Maintains security and integrity of schema
Oversees modifications to schema
Full disaster recovery plan and practice of schema recovery
Enterprise Administrator
Creation and management of the forest
Overall security and reliability of the forest
Creation and removal of domains
Management of trust relationship with ALS domain
Management of trust relationship with JGI-OSF domain
Full disaster recovery plan and practice of trust recovery
Domain Administrator
Creation and management of directory infrastructure
o Includes FSMO roles, trusts, Kerberos KDCs, replication topology, etc.
o Creation of all top-level OU hierarchies with LBL standard sub-OUs, groups, and appropriate security permissions. This includes adding the OU Admins to the AddComputers group, Group Policy Creator Owners group, and OU Admins mail list. It also includes setting appropriate permissions on the created objects and linking of default GPOs.
Monitoring and reporting associated with the reliability and security of the domain
o Use the domain admin account only for actions that require the privilege level of this account
o Monitoring changes to domain root and domain controllers OU to ensure unauthorized changes do not occur
o Day-to-day management of the domain controllers
o Monitoring connectivity, synchronization, replication, netlogon, time services, FSMO roles, schema, NTDS database partitions, DNS settings, SRV records, and trust relationships
o Review DC event and security logs and take corrective actions
o Monitor and resolve security situations at all levels of the domain to ensure a stable and secure domain
Domain Controller Management
o Physical security of the domain controllers in IT Division space and oversight for all domain controllers
o Backups and restores on domain controllers
o Full disaster recovery plan and practice recovery of DCs and core Directory objects
Policy monitoring and compliance
o Apply and enforce LBL standard naming conventions for objects in the domain
o Comply with LBL AD policies and standards as defined on the AD Web Site
o Monitor compliance with LBL AD policies and standards as defined on the AD Web Site, including Change Management
Communication and Coordination
o Arbitrate disputes between OU Admins
o Provide OU Admins with assistance when requested
o Coordination with the LBL Cyber Security group to ensure the LBL domain is secure
o Comply with all Cyber Security group orders regarding emergency conditions
o Work collectively with the OU administrators
Secure remote administration of the DCs and member servers managed by the Infrastructure Group
Manage group policy at root of domain and for Domain Controllers OU
Manage the root Users and the root Computers OUs
Install and manage security reporting tools used to monitor changes to the Active Directory
Coordinate and configure alarm distribution to OU Admins for OU-related events
Plan and manage all migrations and upgrades related to the AD or the DCs
Trusts
A trust is a relationship, which you establish between domains, that makes it possible for users in one domain to be authenticated by a domain controller in the other domain.
All Active Directory trusts between domains within a forest are transitive, two-way trusts. Therefore, both domains in a trust relationship are trusted. As shown in the following illustration, this means that if Domain A trusts Domain B and Domain B trusts Domain C, users from Domain C can access resources in Domain A (when they are assigned the proper permissions). Only members of the Domain Admins group can manage trust relationships.
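Because intra-forest trusts are two-way and transitive, the set of domains that a given domain can effectively reach is the connected component of the trust graph containing it. A small sketch with the hypothetical Domain A/B/C names from the example above:

```python
from collections import deque

# Sketch: compute which domains a user's domain can reach through
# two-way transitive trusts, as in the Domain A/B/C example above.

def reachable_domains(trusts, start):
    """trusts: list of (domain, domain) pairs, each a two-way trust."""
    neighbors = {}
    for a, b in trusts:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)  # trusts are two-way
    seen, queue = {start}, deque([start])
    while queue:
        current = queue.popleft()
        for nxt in neighbors.get(current, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Domain A trusts B, and B trusts C: users in C can reach resources in A
# (provided they are assigned the proper permissions).
print(reachable_domains([("A", "B"), ("B", "C")], "C"))  # {'A', 'B', 'C'}
```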
Trust protocols
A domain controller authenticates users and applications using one of two protocols: the Kerberos version 5 (V5) protocol or NTLM. The Kerberos V5 protocol is the default protocol for computers in an Active Directory domain. If any computer in a transaction does not support the Kerberos V5 protocol, the NTLM protocol is used.
With the Kerberos V5 protocol, the client requests a ticket from a domain controller in its account domain to the server in the trusting domain. This ticket is issued by an intermediary that is trusted by the client and the server. The client presents this trusted ticket to the server in the trusting domain for authentication. For more information, see Kerberos V5 authentication.
When a client tries to access resources on a server in another domain using NTLM authentication, the server that contains the resource must contact a domain controller in the client account domain to verify the account credentials.
Trusted domain objects
Trusted domain objects (TDOs) are objects that represent each trust relationship within a particular domain. Each time that a trust is established, a unique TDO is created and stored in its domain (in the System container). Attributes such as trust transitivity, type, and the reciprocal domain names are represented in the TDO.
Forest trust TDOs store additional attributes to identify all the trusted namespaces from its partner forest. These attributes include domain tree names, user principal name (UPN) suffixes, service principal name (SPN) suffixes, and security identifier (SID) namespaces.
Question 8
8. Explain the scenarios that are best suited for a single forest model
The deployment below, a simple environment with a single domain in a single forest, can serve as an example:
Server1: Domain Controller and DNS server (only one NIC with a static IP address and DNS server points to itself)
Server2: Domain member (in the same subnet with Server1, DNS server points to Server1)
Install AD DS in Server Manager on Server1 and choose Create a new forest on the Choose a Deployment Configuration page. On the Domain Controller Options page, select the DNS server option to create the first domain controller with the DNS role in the new forest.
Then promote Server1 to a domain controller. After that, add Server2 to the domain.
Question 9
9. Explain the reasons for implementing multiple forests, and provide examples
Total account security and partitioning requires a separate forest. Most companies can be configured in a single forest environment, but some agencies and companies require a higher level of security to protect from unauthorized and administrative attacks. This doesn't mean that every company should consider multiple forests; in fact, the opposite is true. But if you have a situation where there is sensitive data that you need to protect, it might be appropriate to build a separate forest for that data, leaving the user population in a separate forest. The same is true if you have resources that might need to be accessed from the DMZ. Instead of putting a domain controller of your production forest in the DMZ, you might consider a separate application domain and leave the sensitive data within the intranet on the production forest.
It's also important to recognize and secure the service admin accounts in your domain and forest to protect domain resources and objects from accidental or malicious attacks. There are key accounts in every domain that offer the greatest risk to security: <root>Enterprise Admins, <root>Schema Admins, <domain>Domain Admins, <domain>Administrators, <domain>Server Operators, <domain>Backup Operators, <domain>Account Operators, and even <domain>Print Operators. Some of these accounts have elevated permissions that can allow administrators to impersonate users; read, modify, or delete Windows-secured resource or configuration settings on machines joined to the domains in the forest; bypass, disable, or defeat system invariants such as ACLs, auditing, Flexible Single Master Operations (FSMOs), or read-only partitions; and cause changes to replicate to other DCs.
There are reasons apart from security to consider multiple forests in your organization. The nature of your business might require autonomy or isolation. Perhaps you need the ability to manage schemas differently within your organization. Strict legal requirements could also justify the use of a separate forest to ensure that some data is protected from other administrators. One of the most common reasons: a merger where a decentralized (and not fully trusted) administrative model is in place. You might also want to consider a multiple forest model if you're building an AD structure with the intent of separating it after a spin-off or sell-off.
Question 10
10. Describe the AD DS domain design models with examples:
Single domain
Single domain tree
Multiple domain trees
Regional domain
Resource domain
Single domain model
A single domain model is the easiest to administer and the least expensive to maintain. It consists of a forest that contains a single domain. This domain is the forest root domain, and it contains all of the user and group accounts in the forest.
A single domain forest model reduces administrative complexity by providing the following advantages:
Any domain controller can authenticate any user in the forest.
All domain controllers can be global catalogs, so you do not need to plan for global catalog server placement.
In a single domain forest, all directory data is replicated to all geographic locations that host domain controllers. While this model is the easiest to manage, it also creates the most replication traffic of the two domain models. Partitioning the directory into multiple domains limits the replication of objects to specific geographic regions but results in more administrative overhead.
Regional domain model
All object data within a domain is replicated to all domain controllers in that domain. For this reason, if your forest includes a large number of users that are distributed across different geographic locations connected by a wide area network (WAN), you might need to deploy regional domains to reduce replication traffic over the WAN links. Geographically based regional domains can be organized according to network WAN connectivity.
The regional domain model enables you to maintain a stable environment over time. Base the regions used to define domains in your model on stable elements, such as continental boundaries. Domains based on other factors, such as groups within the organization, can change frequently and might require you to restructure your environment.
The regional domain model consists of a forest root domain and one or more regional domains. Creating a regional domain model design involves identifying what domain is the forest root domain and determining the number of additional domains that are required to meet your replication requirements. If your organization includes groups that require data isolation or service isolation from other groups in the organization, create a separate forest for these groups. Domains do not provide data isolation or service isolation.
Resource domain
In Microsoft Windows NT, a type of domain in an enterprise networking environment that includes file, print, and other resources for users throughout the enterprise. Resource domains are part of a master domain model or multiple master domain model enterprise-level implementation of Windows NT. Resource domains simplify resource administration by separating the administration of resources from the administration of user accounts. In a master domain model implementation of Windows NT, an account domain or master domain contains user accounts for every user in the enterprise and is usually located at corporate headquarters. Servers and workstations at branch offices belong to other domains called resource domains. A trust relationship is established so that each resource domain in the enterprise trusts the account domain. Users at branch offices who want to log on to the network simply log on to the account domain even though their workstations are located within resource domains. Administrators at branch offices are responsible for managing only the resources (file and print shares, Web servers, database servers, and so forth) for their own domain and are not involved in account management.
Question 11
11. Explain how important it is to run GPO backups and reports on a regular basis
The GPMC provides mechanisms for backing up, restoring, migrating, and copying existing GPOs. These capabilities are very important for maintaining your Group Policy deployments in the event of an error or a disaster. They help you avoid having to manually recreate lost or damaged GPOs and then repeat the planning, testing, and deployment phases. Part of your ongoing Group Policy operations plan should include regular backups of all GPOs. Inform all Group Policy administrators about how to use the GPMC to restore GPOs.
The GPMC also provides for copying and importing GPOs, both from the same domain and across domains. You can use the GPMC to migrate an existing GPO, for example, from an existing domain into a newly deployed domain. You can either copy GPOs or import policy settings from one GPO into another GPO. Doing this can save you time and trouble by allowing you to re-use the contents of existing GPOs. Copying GPOs allows you to move straight from the staging phase to production, if you have configured the proper trust relationships between the environments. Importing GPOs allows you to transfer policy settings from a backed-up GPO into an existing GPO, and is especially useful in situations where a trust relationship is not present between the source and destination domains. If you want to reuse existing GPOs, copying also allows you to conveniently move GPOs from one production environment to another.
Using the GPMC to work with GPOs
To create GPO backups, you must have at least Read access to the GPOs and Write access to the folder in which the backups are stored. See Figure 6 to help you identify the items referred to in the procedures that follow.
Using the GPMC to back up GPOs and view GPO backups
The backup operation backs up a production GPO to the file system. The location of the backup can be any folder to which you have Write access. After backing up GPOs, you must use the GPMC to display and manipulate the contents of your backup folder, either by using the GPMC UI or programmatically by using a script. Do not interact with archived GPOs directly through the file system. After the GPOs are backed up, use the GPMC to process archived GPOs by using the Import and Restore operations.
Question 12
12. Explain the functions of the KCC and the Inter-Site Topology Generator (ISTG).
The KCC is a built-in process that runs on all domain controllers. It is a dynamic-link library that modifies data in the local directory in response to systemwide changes, which are made known to the KCC by changes to the data within Active Directory. The KCC generates and maintains the replication topology for replication within sites and between sites. The KCC has two major functions:
Configures replication connections (connection objects) between domain controllers. Each connection object defines incoming replication from a replication partner. Within a site, each KCC generates its own connections. For replication between sites, a single KCC per site generates all connections between sites.
Converts the connection objects that represent inbound replication to the local domain controller into the replication agreements that are actually used by the replication engine.
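Within a site, the KCC arranges domain controllers into a bidirectional replication ring (and adds shortcut connections when hop counts grow). The ring-building part can be sketched as follows; the server names are hypothetical and the three-hop optimization is deliberately omitted:

```python
# Sketch: the intra-site replication ring the KCC builds. Each DC receives
# inbound connection objects from its two ring neighbors. The real KCC also
# adds extra edges to keep any DC within three hops; omitted here.

def intrasite_ring(dcs):
    """Return {dc: set of partners it replicates from} for a ring of DCs."""
    n = len(dcs)
    connections = {dc: set() for dc in dcs}
    if n < 2:
        return connections  # a lone DC has no replication partners
    for i, dc in enumerate(dcs):
        connections[dc].add(dcs[(i - 1) % n])  # inbound from previous neighbor
        connections[dc].add(dcs[(i + 1) % n])  # inbound from next neighbor
    return connections

ring = intrasite_ring(["DC1", "DC2", "DC3", "DC4"])
print(ring["DC1"])  # {'DC2', 'DC4'}
```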
Intersite Topology Generator Role
A fundamental concept in the generation of the topology within a site is that each server does its part to create a sitewide topology. In a similar manner, the generation of the topology between sites depends on each site doing its part to create a forest-wide topology. As part of this effort, one domain controller per site assumes the role of the intersite topology generator. The KCC on this domain controller is responsible for creating the connections between the domain controllers in its site and the domain controllers in other sites, including specifically the inbound replication connection objects for all bridgehead servers in the site in which the domain controller is located. If the intersite topology generator assesses the topology and determines that its own site is the only site, it performs no further processing, because no connections between sites are possible for that configuration.
Question 13
13. Explain the implications on replication of the placement of RODCs and global catalog servers
Global catalog placement requires planning except when you have a single-domain forest. In a single-domain forest, configure all domain controllers as global catalog servers. Because every domain controller stores the only domain directory partition in the forest, configuring each domain controller as a global catalog server does not require any additional disk space, CPU usage, or replication traffic. In a single-domain forest, all domain controllers act as virtual global catalog servers; that is, they can all respond to any authentication or service request. This special condition for single-domain forests is by design. Authentication requests do not require contacting a global catalog server as they do when there are multiple domains and a user can be a member of a universal group that exists in a different domain. However, only domain controllers that are designated as global catalog servers can respond to global catalog queries on the global catalog port 3268. To simplify administration in this scenario and to ensure consistent responses, designating all domain controllers as global catalog servers eliminates the concern about which domain controllers can respond to global catalog queries. Specifically, any time a user uses Start > Search > For People or Find Printers, or expands universal groups, these requests go only to the global catalog.
In multiple-domain forests, global catalog servers facilitate user logon requests and forest-wide searches. The following illustration shows how to determine which locations require global catalog servers.
In most cases, it is recommended that you include the global catalog when you install new domain controllers. The following exceptions apply:
Limited bandwidth: In remote sites, if the wide area network (WAN) link between the remote site and the hub site is limited, you can use universal group membership caching in the remote site to accommodate the logon needs of users in the site.
Infrastructure operations master role incompatibility: Do not place the global catalog on a domain controller that hosts the infrastructure operations master role in the domain unless all domain controllers in the domain are global catalog servers or the forest has only one domain.
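The placement guidance above reduces to a small decision rule. The sketch below encodes it with simplified boolean inputs; real planning weighs WAN bandwidth, user counts, and site topology, so treat this as an illustration rather than an official algorithm:

```python
# Sketch of the global catalog placement guidance above, with simplified inputs.

def make_global_catalog(single_domain_forest, hosts_infrastructure_master,
                        all_dcs_are_gcs, limited_wan_bandwidth):
    """Decide whether a new domain controller should also be a GC server."""
    if single_domain_forest:
        return True  # no extra disk, CPU, or replication cost in one domain
    if hosts_infrastructure_master and not all_dcs_are_gcs:
        return False  # infrastructure operations master role incompatibility
    if limited_wan_bandwidth:
        return False  # prefer universal group membership caching instead
    return True  # default: include the GC when installing new DCs

print(make_global_catalog(True, True, False, True))   # True
print(make_global_catalog(False, True, False, False)) # False
```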