Wednesday, December 30, 2015

Moving User SQL Databases Using PowerShell

I grew tired of manually moving databases around using a combination of SQL and "Copy / Paste", so I wrote a bit of PowerShell to save myself some time and effort.
Notice that I am using Copy-Item and then deleting the original, rather than Move-Item. This is because of how permissions on objects are handled with a copy versus a move, plus I am paranoid about not having my original database handy if the move fails or the moved database gets corrupted in transit.

Let's take a look at the code

In the first section we will be setting the variables.
The next step is to get the database information:
Once we have the database information, the database will need to be taken OFFLINE.
Once the database is offline, we can copy the files, set ACLs, and update the database with the new .mdf and .ldf file locations.
Then we can bring the DB back ONLINE.
Once the DB is back ONLINE, wait for 10 seconds and delete the original database files.
Here is a look at the code once it is all put together:
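(A minimal sketch of that flow, assuming a local default SQL instance, the SQLPS module, and placeholder database/path names; adjust before use.)
Import-Module SQLPS -DisableNameChecking -EA 0
$databaseName = "pcDemo_Personnel"   # Placeholder database name
$destination  = "E:\SQLData"         # Placeholder destination folder
# Get the current physical file locations (.mdf and .ldf)
$dbFiles = Invoke-Sqlcmd -Query "SELECT name, physical_name FROM sys.master_files WHERE database_id = DB_ID('$databaseName');"
# Take the database OFFLINE
Invoke-Sqlcmd -Query "ALTER DATABASE [$databaseName] SET OFFLINE WITH ROLLBACK IMMEDIATE;"
foreach ($dbFile in $dbFiles) {
    $target = Join-Path $destination (Split-Path $dbFile.physical_name -Leaf)
    # Copy (not move) so the original remains if anything goes wrong
    Copy-Item -Path $dbFile.physical_name -Destination $target
    # Carry the ACL of the source file over to the copy
    Get-Acl -Path $dbFile.physical_name | Set-Acl -Path $target
    # Point the database at the new file location
    Invoke-Sqlcmd -Query "ALTER DATABASE [$databaseName] MODIFY FILE (NAME = N'$($dbFile.name)', FILENAME = N'$target');"
}
# Bring the database back ONLINE
Invoke-Sqlcmd -Query "ALTER DATABASE [$databaseName] SET ONLINE;"
# Wait 10 seconds, then delete the original files
Start-Sleep -Seconds 10
$dbFiles | ForEach-Object { Remove-Item -Path $_.physical_name }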
Updates
01/01/2016: Fixed issues with ACL for moved files by converting file location to UNC-based path format.
01/02/2016: Updated to include snippets and comments
01/05/2016: Major update to fix $destination to UNC path, added copyItem function, item extension switch, updated outputs with Write-Output and Write-Verbose.

Monday, August 24, 2015

Manually Download and Install the Prerequisites for SharePoint 2016

At some point in your career of deploying SharePoint, you will come across a scenario where your SharePoint servers are not allowed Internet access. Most of the server farms that I work on are not allowed access to the Internet, or are firewalled off from surfing the web or downloading items directly. This brings me to the need to download the required files to a specific location and use that location for SharePoint's PrerequisiteInstaller.exe to complete its installation.
If you need to do an offline installation of SharePoint 2016, you will need to have the prerequisite files downloaded ahead of time. You will also need the SharePoint 2016 .iso (download here). These scripts are based off of the scripts provided by Craig Lussier (@craiglussier).
These scripts will work for SharePoint 2016 on Windows Server 2012R2 or on Windows Server 2016.
The only change that needs to be made is the location of the SharePoint prerequisiteinstaller.exe file in script #3 line #3. So, update the $sp2016Location variable before running. If the SP2016 .iso is mounted to the "D:\" drive, you have nothing to change and, in theory, this should just work for you out of the box.
The first thing that I like to do is to add the Windows features:
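(A condensed sketch of that step, assuming Windows Server 2012 R2 feature names and the Server media mounted for the offline .NET 3.5 payload; see the linked scripts for the complete feature list.)
Import-Module ServerManager
# Condensed list; Craig Lussier's scripts contain the full set of features
Add-WindowsFeature Net-Framework-Features, Web-Server, Web-WebServer, Web-Common-Http, `
    Web-Static-Content, Web-Default-Doc, Web-Asp-Net45, Web-Net-Ext45, Web-ISAPI-Ext, `
    Web-ISAPI-Filter, Web-Windows-Auth, Web-Filtering, Web-Stat-Compression, `
    Web-Dyn-Compression, Web-Mgmt-Console, WAS, WAS-Process-Model, WAS-NET-Environment, `
    Windows-Identity-Foundation -Source "D:\sources\sxs" # -Source supplies the offline .NET 3.5 payload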
The second step is to download the items required for the prerequisite installer:
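(A sketch of the download step using BITS; the URLs below are placeholders for the real download links, which are all listed in the linked scripts.)
$downloadFolder = "C:\SP2016-Prereqs"
if (!(Test-Path $downloadFolder)) { New-Item -Path $downloadFolder -ItemType Directory | Out-Null }
# Placeholder URLs; substitute the actual prerequisite download links
$urls = @(
    "https://download.microsoft.com/<path-to>/sqlncli.msi",
    "https://download.microsoft.com/<path-to>/WcfDataServices.exe"
)
foreach ($url in $urls) {
    Start-BitsTransfer -Source $url -Destination (Join-Path $downloadFolder (Split-Path $url -Leaf))
}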
The next step is to run the prerequisite installer. There is a requirement to restart the server during the installation and provisioning of settings:
The final step is to continue the installation of the prerequisites:
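(Both of these steps invoke PrerequisiteInstaller.exe with the same offline switches; the installer picks up where it left off after the reboot. A sketch, assuming the .iso is mounted at D:\ and the download folder from above; the file names vary by prerequisite version.)
$sp2016Location = "D:\"           # Update this if the .iso is mounted elsewhere
$prereqs = "C:\SP2016-Prereqs"    # Folder holding the downloaded prerequisites
& ($sp2016Location + "prerequisiteinstaller.exe") /unattended `
    /SQLNCli:"$prereqs\sqlncli.msi" `
    /Sync:"$prereqs\Synchronization.msi" `
    /AppFabric:"$prereqs\WindowsServerAppFabricSetup_x64.exe" `
    /IDFX11:"$prereqs\MicrosoftIdentityExtensions-64.msi" `
    /MSIPCClient:"$prereqs\setup_msipc_x64.exe" `
    /WCFDataServices56:"$prereqs\WcfDataServices.exe" `
    /KB3092423:"$prereqs\AppFabric-KB3092423-x64-ENU.exe" `
    /MSVCRT11:"$prereqs\vcredist_x64.exe" `
    /MSVCRT14:"$prereqs\vc_redist.x64.exe" `
    /ODBC:"$prereqs\msodbcsql.msi" `
    /DotNetFx:"$prereqs\NDP46-KB3045557-x86-x64-AllOS-ENU.exe"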
I hope that this saves you some time and headaches trying to get SP2016 installed and running correctly.

Updates

08/24/2015 Added ability to install on either Technical Preview for Server 2016 or Server 2012R2
08/25/2015 Added verbiage on updating $sp2016Location variable, and added workaround for PowerShell bug in download script.
11/25/2015 Updated for installation of SharePoint 2016 Beta 2 bits.
11/14/2017 Updated the scripts to install SharePoint 2016 on either Server 2012R2 or Server 2016.
11/29/2017 Updated the scripts to install SharePoint 2016 on Server 2016 Standard or Datacenter.
Thank you Matthew Bramer for fixing this oversight.

Monday, July 20, 2015

Copying BLOBs Between Azure Storage Containers

In the past when I needed to move BLOBs between Azure Containers I would use a script that I put together based off of Michael Washam's blog post Copying VHDs (Blobs) between Storage Accounts. However with my latest project, I actually needed to move several BLOBs from the Azure Commercial Cloud to the Azure Government Cloud.
Right off the bat, the first problem is that the endpoint for the Government Cloud is not the default endpoint when using the PowerShell cmdlets. After spending some time updating my script to work in either Commercial or Government Azure, I was still not able to move anything. After a bit of "this worked before, why are you not working now?" frustration, it was time for plan B.

Plan B

Luckily enough, the Windows Azure Storage Team had put together a command line utility called AzCopy. AzCopy is a very powerful tool, as it will allow you to copy items from a machine on your local network into Azure. It will also copy items from one Azure tenant to another Azure tenant. The problem that I ran into is that the copy is synchronous, meaning that it copies one item at a time, and you cannot start another copy until the previous operation has finished. I also ran the command line in ISE rather than directly at the command line, which was not as nice. In the AzCopy command line utility, a status is displayed letting you know the elapsed time and when the copy has completed. In ISE, you only know your BLOB is copied when the script has finished running. You can read up on and download AzCopy from Getting Started with the AzCopy Command-Line Utility. This is the script that I used to move BLOBs between the Azure Commercial Tenant and the Azure Government Tenant.
$sourceContainer = "https://commercialsharepoint.blob.core.windows.net/images"
$sourceKey = "insert your key here"
$destinationContainer = "https://governmentsharepoint.blob.core.usgovcloudapi.net/images"
$destinationKey = "insert your key here"
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
$files = @($file1,$file2,$file3)
function copyFiles {
    foreach ($file in $files) {
        & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /Pattern:$file 
    }
}
function copyAllFiles {
    & 'C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy\AzCopy.exe' /Source:$sourceContainer /Dest:$destinationContainer /SourceKey:$sourceKey /DestKey:$destinationKey /S
}
# Un-comment the function you want to run:
# copyFiles
# copyAllFiles
While I was waiting for my BLOBs to copy over, I decided to look back at my Plan A, and see if I could figure out my issue(s).

Plan A

After cleaning up my script and taking a bit of a "Type-A personality" look at it, I noticed that I was grabbing the Azure Container Object, but not grabbing the BLOB Object before copying the item. Once I piped the container to the BLOB object before copying, it all worked as expected. Below is my script, but please notice that on the Start-AzureStorageBlobCopy cmdlet, I am using the -Force parameter to overwrite the existing destination BLOB if it exists.
# Source Storage Information
$srcStorageAccount = "commercialsharepoint"
$srcContainer = "images"
$srcStorageKey = "insert your key here"
$srcEndpoint = "core.windows.net"
# Destination Storage Information
$destStorageAccount  = "governmentsharepoint"  
$destContainer = "images"
$destStorageKey = "insert your key here"
$destEndpoint = "core.usgovcloudapi.net" 
# Individual File Names (if required)
$file1 = "Server2012R2-Standard-OWA.vhd"
$file2 = "Server2012R2-Standard-SP2013.vhd"
$file3 = "Server2012R2-Standard-SQL2014-Enterprise.vhd"
# Create file name array
$files = @($file1, $file2, $file3)
# Create blobs array
$blobStatus = @()
### Create the source storage account context ### 
$srcContext = New-AzureStorageContext   -StorageAccountName $srcStorageAccount `
                                        -StorageAccountKey $srcStorageKey `
                                        -Endpoint $srcEndpoint 
### Create the destination storage account context ### 
$destContext = New-AzureStorageContext  -StorageAccountName $destStorageAccount `
                                        -StorageAccountKey $destStorageKey `
                                        -Endpoint $destEndpoint
#region Copy Specific Files in Container
    function copyFiles {
        $i = 0
        foreach ($file in $files) {
            $files[$i] = Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | 
                         Get-AzureStorageBlob -Blob $file | 
                         Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $file -ConcurrentTaskCount 512 -Force 
            $i++ 
        }  
        getBlobStatus -blobs $files     
    }
#endregion
#region Copy All Files in Container
    function copyAllFiles {
        $blobs = Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | Get-AzureStorageBlob
        $i = 0
        foreach ($blob in $blobs) {
            # Set per blob; change this value to rename the BLOB at the destination
            $destBlobName = $blob.Name
            $blobs[$i] =  Get-AzureStorageContainer -Name $srcContainer -Context $srcContext | 
                          Get-AzureStorageBlob -Blob $blob.Name | 
                          Start-AzureStorageBlobCopy -DestContainer $destContainer -DestContext $destContext -DestBlob $destBlobName -ConcurrentTaskCount 512 -Force
            $i ++
        }  
        getBlobStatus -blobs $blobs 
    }
#endregion
#region Get Blob Copy Status
    function getBlobStatus($blobs) {
        $completed = $false
        While ($completed -ne $true) {
            # Count the blobs that have not finished copying on each pass
            $counter = 0
            foreach ($blob in $blobs) {
                $status = $blob | Get-AzureStorageBlobCopyState
                Write-Host($blob.Name + " has a status of: " + $status.Status)
                if ($status.Status -ne "Success") {
                    $counter ++
                }
            }
            if ($counter -eq 0) {
                $completed = $true
            }
            ELSE {
                Write-Host("Waiting 30 seconds...")
                Start-Sleep -Seconds 30
            }
        }
    }
#endregion
# copyFiles
# copyAllFiles

Conclusion

Having more than one way to get something accomplished within Azure is fantastic. There is not a lot of documentation out there on how to work with Azure and PowerShell within the Government Cloud, so hopefully this will make life easier for someone. Remember that these scripts can be used across any tenant: Commercial, Government, and on-premises.

Updates

08/05/2015 Fixed cut and paste variable issues and added $destBlobName for renaming BLOBs at the destination location, and updated BLOB status check wait time.

Thursday, July 2, 2015

Provisioning SQL Server Always-On Without Rights

Separation of roles, duties, and responsibilities in a larger corporate/government environment is a good thing. It is a good thing, unless you are actually trying to get something accomplished quickly on your own. But that is the point of separating roles: one person cannot simply add objects into Active Directory on a whim, or play with the F5 because they watched a video on YouTube. I had recently designed a solution that was going to take advantage of SQL Server 2012 AlwaysOn (High Availability and Disaster Recovery) availability group listeners. The problem was that I was not a domain admin, and did not have rights to create a computer object for the Windows OS cluster, or for the SQL group listener.

Creating the OS Cluster

Creating the OS Cluster was the easy part; I just needed to find an administrator that had the rights to create a computer object in the domain. Once that was accomplished, I made sure that the user had local admin rights on all of the soon-to-be clustered machines, and had them run the following script:
$node1 = "Node-01.contoso.local"
$node2 = "Node-02.contoso.local"
$osClusterName = "THESPSQLCLUSTER"
$osClusterIP = "192.168.1.11"
# $ignoreAddress = "172.20.0.0/21"
$nodes = ($node1, $node2)
Import-Module FailoverClusters
function testCluster {
    # Test Cluster
    $test = Test-Cluster -Node $nodes
    $testPath = Join-Path $env:TEMP $test.Name
    # View Report
    $IE=new-object -com internetexplorer.application
    $IE.navigate2($testPath)
    $IE.visible=$true
}
function buildCluster {
    # Build Cluster
    $new = New-Cluster -Name $osClusterName -Node $nodes -StaticAddress $osClusterIP -NoStorage # -IgnoreNetwork $ignoreAddress
    Get-Cluster | Select *
    # View Report
    $newPath = "C:\Windows\cluster\Reports\" + $new.Name.ToString()
    $IE=new-object -com internetexplorer.application
    $IE.navigate2($newPath)
    $IE.visible=$true
}
# Un-comment what you want to do...
# testCluster
buildCluster

Creating the Group Listener

Creating the Group Listener was a bit more challenging, but not too bad. Once the OS Cluster computer object was created (thespsqlcluster.contoso.local), the newly created computer object needed to be given rights as well.
- The cluster identity 'thespsqlcluster' needs Create Computer Objects permissions. By default all computer objects are created in the same container as the cluster identity 'thespsqlcluster'.
- If there is an existing computer object, verify the Cluster Identity 'thespsqlcluster' has 'Full Control' permission to that computer object using the Active Directory Users and Computers tool.
You will also want to make sure that the quota for computer objects for 'thespsqlcluster' has not been reached.
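By default, that quota is 10 computer objects per account. A quick way to check it, assuming the ActiveDirectory module is available:
Import-Module ActiveDirectory
# ms-DS-MachineAccountQuota is the domain-wide default (10 unless changed)
Get-ADObject -Identity (Get-ADDomain).DistinguishedName -Properties "ms-DS-MachineAccountQuota"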
The domain administrator was also given Sysadmin rights to all of the SQL Server instances in the cluster.
After all the permissions were set, the Domain admin could run the following script on the Primary SQL Instance to create the Group Listener:


Import-Module ServerManager -EA 0
Import-Module SQLPS -DisableNameChecking -EA 0
$listenerName = "LSN-TheSPDatabases"
$server = $env:COMPUTERNAME
$path = "SQLSERVER:\sql\$server\default\availabilitygroups\"
$groups = Get-ChildItem -Path $path
$groupPath = $path + $groups[0].Name
$groupPath
New-SqlAvailabilityGroupListener `
    -Name $listenerName `
    -StaticIp "192.168.1.12/255.255.255.0" `
    -Port "1433" `
    -Path $groupPath 

Important

After the group listener is created, all the rights that were put in place can be removed, with the understanding that if you wish to add another listener later, the permissions will have to be temporarily reinstated. In my case, once all of the computer objects were created successfully, all rights were removed from the cluster computer object and the domain administrator was removed from SQL.

Updates

07/06/2015 Cleaned up diction and grammar, added the Important section.
10/21/2015 Updated computer object permission requirements

Sunday, June 28, 2015

SharePoint and FIPS Exceptions

A couple of weeks ago, I started a "Greenfield" implementation of SharePoint 2013 for a client. This organization has SharePoint 2003, 2007, 2010 already existing in their environment, so I ignorantly figured that the installation should go pretty smoothly.
All of the SharePoint and SQL bits installed correctly; however, when trying to provision Central Administration, I ran into an issue where I was not able to create the config database:

What is FIPS?

FIPS stands for the Federal Information Processing Standards, and is used for the standardization of information, such as FIPS 10-4 for Country Codes or FIPS 5-2 for State Codes. However, my problem is with FIPS 140-2, the Security Requirements for Cryptographic Modules, which states:
This Federal Information Processing Standard (140-2) specifies the security requirements that will be satisfied by a cryptographic module, providing four increasing, qualitative levels intended to cover a wide range of potential applications and environments. The areas covered, related to the secure design and implementation of a cryptographic module, include specification; ports and interfaces; roles, services, and authentication; finite state model; physical security; operational environment; cryptographic key management; electromagnetic interference/electromagnetic compatibility (EMI/EMC); self-tests; design assurance; and mitigation of other attacks. [Supersedes FIPS 140-1 (January 11, 1994): http://www.nist.gov/manuscript-publication-search.cfm?pub_id=917970]
In essence, FIPS 140-2 is a standard that can be tested against and certified, so that the server is hardened up to a government standard. The US is not the only government that uses the FIPS standard for server hardening. The FIPS flag is set through Local/Group Security Policy, under Security Settings > Local Policies > Security Options, as "System cryptography: Use FIPS compliant algorithms for encryption, hashing, and signing":
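You can also check the current state from PowerShell (1 = enabled, 0 = disabled):
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\LSA\FipsAlgorithmPolicy" -Name Enabled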

FIPS and SharePoint

There are a couple of problems with using SharePoint on a FIPS-enabled server. SharePoint Server uses MD5 for computing hash values (not for security purposes), which is an unapproved algorithm. According to Microsoft (https://technet.microsoft.com/en-us/library/cc750357.aspx), the Schannel Security Package is forced to negotiate sessions using TLS 1.0, and the following supported cipher suites are disabled:

  • TLS_RSA_WITH_RC4_128_SHA
  • TLS_RSA_WITH_RC4_128_MD5
  • SSL_CK_RC4_128_WITH_MD5
  • SSL_CK_DES_192_EDE3_CBC_WITH_MD5
  • TLS_RSA_WITH_NULL_MD5
  • TLS_RSA_WITH_NULL_SHA
If you want to read up more, here are some good posts:

What's Next?

Disabling FIPS is easy; however, a larger discussion needs to be had first. Is FIPS set at the GPO level, or was it enabled by default as part of the provisioned image? Will the security team come after you if you disable it without their knowledge? Why do they have FIPS enabled, and what are they trying to accomplish with it? All of these questions will need to be answered before changing your server settings.

Fixing FIPS with PowerShell

This is how I reset the FIPS Algorithm Policy so that I could get Central Administration provisioned. Remember that FIPS will need to be disabled on all of your SharePoint Servers.

$sets = @("CurrentControlSet","ControlSet001","ControlSet002")
foreach ($set in $sets) {
    $path = "HKLM:\SYSTEM\$set\Control\LSA\FipsAlgorithmPolicy"
    if ((Get-ItemProperty -Path $path).Enabled -ne 0) {
        Set-ItemProperty -Path $path -Name "Enabled" -Value "0"
        Write-Host("Set $path Enabled to 0")
    }
}

Thursday, April 16, 2015

Binding SSL Certificates to SNI Enabled Host Headers with PowerShell

It is funny how some things are just easier to deploy with a GUI than to figure out in code or PowerShell. That is, until you run into a client that says you cannot use a GUI within their environment. With SharePoint 2013 using a single Web Application (IIS site) for Host Named Site Collections (HNSC), it is necessary to use a combination of host names and Server Name Indication (SNI) to route the incoming requests correctly.
You would think that, with the ability to create a New-WebBinding in PowerShell, you would be able to set all of your binding properties in one cmdlet, but you would be wrong.
If you run the Get-WebBinding cmdlet for your site, you will see that there are properties for the certificate thumbprint (certificateHash) and the certificate store location.
Looking at the Set-WebBinding cmdlet, there is the ability to set PropertyNames and Values, but they will not allow you to update the thumbprint or certificate store location...
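For example, this works for a simple property like the host header (the site and host names here are only illustrative):
Import-Module WebAdministration -EA 0
# Simple binding properties can be changed...
Set-WebBinding -Name "pcDemo 2013 Hosting Web App" -BindingInformation "*:443:old.pcDemo.net" -PropertyName HostHeader -Value "new.pcDemo.net"
# ...but there is no PropertyName for certificateHash or the certificate store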
So, how do we set these values through PowerShell:
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers (DNS Names) must be in the Subject Alternative Name
# Variables
$site = "pcDemo 2013 Hosting Web App"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"       # Certificate Friendly Name
$caHostHeader = "ca.pcDemo.net"            # Central Admin Host Header
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $hostHeader = $dnsName.Unicode.ToString()
    if ($hostHeader -ne $caHostHeader) {
        Write-Host("Creating $hostHeader binding...")
        # Create the Binding
        New-WebBinding -Name $site -SslFlags 1 -HostHeader $hostHeader -Protocol https -Port 443 -IPAddress "*"
        Write-Host("Binding " + $cert.FriendlyName + " to $hostHeader...")
        # Add the Certificate
        New-Item -Path "IIS:\SslBindings\*!443!$hostHeader" -Thumbprint $thumbprint -SSLFlags 1
   }
}
Now, you are not truly done yet. Currently the site does not have a default SSL binding, as all of the bindings created so far have host headers associated with them, which will create an error within IIS.
For those of you implementing SharePoint, it is now time to bind your App Domain. This is the binding that will accept all incoming requests that do not have an SNI associated with them.
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers must be in the Subject Alternative Name as DNS Names
# Variables
$site = "pcDemo 2013 Hosting Web App"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"       # Certificate Friendly Name
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
# is there already a binding?
$bindings = Get-WebBinding -Name $site -Port 443 -Protocol https
$exists = $bindings | ? {$_.sslFlags -eq 0}
if ($exists) {
    Write-Host("Binding already exists...")
    Remove-WebBinding -Name $site -Port 443 -Protocol https -HostHeader $null
    Remove-Item -Path "IIS:\SslBindings\0.0.0.0!443"
    Write-Host("Binding removed...")
}
# Create new binding with correct certificate
# Create the Binding
New-WebBinding -Name $site -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName)
# Add the Certificate
New-Item -Path "IIS:\SslBindings\0.0.0.0!443" -Thumbprint $thumbprint
At this point the SharePoint Web Application (IIS site) should have all of its bindings in place. So what could possibly be left? Well, what about the Central Administration (CA) Web Application?
This one will be a combination of the last two scripts because CA will require SNI.
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers must be in the Subject Alternative Name as DNS Names
# Variables
$site = "SharePoint Central Administration v4"      # IIS Site Name to bind certificates to
$certFriendlyName = "san.pcDemo.net"                # Certificate Friendly Name
$caHostHeader = "ca.pcdemo.net"                     # Central Administration Host Header
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
Write-Host("Creating $caHostHeader binding...")
# Create the Binding
New-WebBinding -Name $site -SslFlags 1 -HostHeader $caHostHeader -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName + " to $caHostHeader...")
# Add the Certificate
New-Item -Path "IIS:\SslBindings\*!443!$caHostHeader" -Thumbprint $thumbprint -SSLFlags 1
Now to put the entire thing together:
Import-Module WebAdministration -EA 0
# Prerequisites
# SAN Certificate must already be installed
# ALL Host Headers (DNS Names) must be in the Subject Alternative Name
# Variables
$site = "pcDemo 2013 Hosting Web App"         # IIS Site Name for SharePoint Hosting
$sanCertFriendlyName = "san.pcDemo.net"       # SAN Certificate Friendly Name
$appCertFriendlyName = "pcdemo-apps.net"      # App Domain Wild Card Certificate Friendly Name
$caCertFriendlyName = "ca.pcDemo.net"         # Central Administration Certificate FriendlyName
$caHostHeader = "ca.pcdemo.net"               # Central Administration Host Header
#region Create SAN Bindings
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $sanCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $hostHeader = $dnsName.Unicode.ToString()
    if ($hostHeader -ne $caHostHeader) {
        Write-Host("Creating $hostHeader binding...")
        # Create the Binding
        New-WebBinding -Name $site -SslFlags 1 -HostHeader $hostHeader -Protocol https -Port 443 -IPAddress "*"
        Write-Host("Binding " + $cert.FriendlyName + " to $hostHeader...")
        # Add the Certificate
        New-Item -Path "IIS:\SslBindings\*!443!$hostHeader" -Thumbprint $thumbprint -SSLFlags 1
    }
}
#endregion
#region Create Default 443 Bindings
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $appCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
# is there already a binding?
$bindings = Get-WebBinding -Name $site -Port 443 -Protocol https
$exists = $bindings | ? {$_.sslFlags -eq 0}
if ($exists) {
    Write-Host("Binding already exists...")
    Remove-WebBinding -Name $site -Port 443 -Protocol https -HostHeader $null
    Remove-Item -Path "IIS:\SslBindings\0.0.0.0!443"
    Write-Host("Binding removed...")
}
# Create new binding with correct certificate
# Create the Binding
New-WebBinding -Name $site -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName)
# Add the Certificate
New-Item -Path "IIS:\SslBindings\0.0.0.0!443" -Thumbprint $thumbprint
#endregion
#region Create Central Administration Bindings
$site = "SharePoint Central Administration v4"      # IIS Site Name to bind certificates to
# Certificate stuff
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $caCertFriendlyName)}
$thumbprint = $cert.Thumbprint.ToString()
Write-Host("Creating $caHostHeader binding...")
# Create the Binding
New-WebBinding -Name $site -SslFlags 1 -HostHeader $caHostHeader -Protocol https -Port 443 -IPAddress "*"
Write-Host("Binding " + $cert.FriendlyName + " to $caHostHeader...")
# Add the Certificate
New-Item -Path "IIS:\SslBindings\*!443!$caHostHeader" -Thumbprint $thumbprint -SSLFlags 1
#endregion
Updates
04/19/2015 Fixed paragraph 3 from Get-WebBinding to Set-WebBinding
04/21/2015 Added CA certificate checking and added Remove-Item to allow App Domain cert to be added with New-Item

Monday, February 9, 2015

SharePoint 2013 and Web Application Proxy (WAP) Server

The other day I was talking with a friend of mine, Miguel Wood (@MiguelWood) about a SharePoint 2013 farm that I was provisioning in Azure. We were talking about endpoints and direct access into the SharePoint farm, and I realized that I really should have put all of my public facing URLs through a reverse proxy. Luckily, I had a Windows Server 2012R2 WAP server in Azure already, handling all of the ADFS 3.0 authentication traffic, so I updated my DNS entries to point at the WAP Load Balancing URL.
This is where things started to get a bit messy. After updating the URLs and adding the SSL certificates into the WAP server, I was able to hit my public facing URLs. I was very happy to add this extra level of security to the environment. My happiness was short-lived, because when I clicked on an application that lived in the app domain, I received a 404 Page Not Found error.
Silly me, I had not set up the WAP server to pass the app domain. After spending way too much time on trying to figure it out, I decided that I had best do a bit of research. I came across this TechNet article, https://technet.microsoft.com/en-us/library/dn383655.aspx, which explains why I was not able to pass the app domain successfully.
Well, this is a bit frustrating... So I went back into DNS and pointed the app domain back to the Cloud Service URL for the app domain. Long story short, I reverted to creating CNAMEs for all of the public facing URLs to the appropriate Azure Cloud Service.

But Wait There's More!

Luckily, there is a new version of the Web Application Proxy coming out with the new Windows Server OS. It is still in beta, but according to this Application Proxy Blog post, http://blogs.technet.com/b/applicationproxyblog/archive/2014/10/01/introducing-the-next-version-of-web-application-proxy.aspx:

Wildcard domain publishing – new patterns and easier SharePoint 2013 apps publishing

We are adding the ability to publish not only a specific domain name but an entire sub-domain. This opens new opportunities for customers that want to publish sites in bulk and not one by one. For example, if all your apps are under http://*.apps.internal/, you can publish them using a single external domain like https://*.apps.contoso.com/.
This pattern is important when there is a need to publish SharePoint 2013 apps, which use a special sub-domain for all apps. In this case, only wildcard certificates would work, as the specific apps domain may change over time.

OK, that is all good, so let's see how to set up the new WAP server for SharePoint 2013, including the app domain.

Let's Get Started

If you are already running WAP servers, you will run into issues unless you remove the older versions first. Ideally, you have a development environment to install ADFS and WAP in, as you do not want to deploy the technical evaluation bits in your production environment. As just mentioned, if you do add the new WAP server into an older WAP cluster, you will run into this issue:
I recommend standing up a new ADFS server on a separate VLAN that WAP can connect to, but not actually using the server for ADFS.
The first thing you will need to do is grab the ISO from the TechNet Evaluation Center, http://www.microsoft.com/en-us/evalcenter/evaluate-windows-server-technical-preview
Create a new server for ADFS and a new server for WAP
After you install the OS on the WAP server, make sure that you have at least 2 NIC ports to separate your traffic. You will use one NIC for your incoming Internet traffic and one NIC for your connection to your Internal Network. 
You will want to add the server to your network, but do NOT join the server to your domain.
The WAP server for this demonstration is going to be used solely as a reverse proxy for SharePoint, and not for ADFS. There are plenty of blog posts on how to set up ADFS 3.0, but you will want to install ADFS from the Technical Preview bits. Here is a simple post that installs ADFS on Server 2012R2; nothing has changed.
You will need your App Domain wildcard certificate and your SharePoint SAN certificate, with private keys.
For this blog post, we will add the WAP server to the network, configure the reverse proxy for SharePoint 2013 hybrid (including the app domain on-premises), and re-point the DNS entries to the new WAP server.

Prepare The Server

Ideally you want to run your WAP server with 2 NICs, so at this point, label your 2 NICs and assign the IP addresses for each. For example, to start, my network looks like this:
My external facing NIC looks like this:
My internal facing NIC looks like this:
Adding the default gateway and DNS information will allow us to avoid relying on the HOSTS file and will update our network types:
By having the NICs on different network types, you can then close off the ports appropriately.
After setting up your NICs, clean up your firewall so that only the required ports for each network type are open. For example, you do not want Remote Desktop available on the Internet side of your server. Make sure that you open port 80 on the public side of your firewall, but keep it closed on the internal side.
Next is to enable the WAP server feature.
Add-WindowsFeature -Name web-application-proxy -IncludeManagementTools
If we run:
Get-Command -Module WebApplicationProxy
on both the old and new WAP servers, during the beta phase at least, the available cmdlets are the same.

Import / Export Certificates

We will need to import certificates that contain private keys. We will be populating the WAP server with the following information:
We have placed all of our certificates in the C:\Certificates\Imports folder. To import them all run the following:
$importFolder = "C:\Certificates\Imports"
$file = (Get-ChildItem -Path $importFolder -Recurse)
# Import-Certificate handles .cer files; certificates with private keys (.pfx)
# need Import-PfxCertificate, as shown further below
$certificate = $file | Import-Certificate -CertStoreLocation Cert:\LocalMachine\My
$certificate.DnsNameList | Select Unicode 
Now, if your WAP server is in production, you should have failover onto another WAP server. Remember that at this time we are working with the Technical Preview bits, which are not for production use. To export your certificates to move to your next WAP server:
$exportFolder = "C:\Certificates\Exports"
$pw = ConvertTo-SecureString -String "Pass@w0rd#1" -Force -AsPlainText
$certs = Get-ChildItem -Path cert:\LocalMachine\my | Where-Object {$_.HasPrivateKey -eq $true}
foreach ($cert in $certs) {
    $filePath = $exportFolder + "\" + $cert.FriendlyName.ToString() + ".pfx"
    $cert | Export-PfxCertificate -FilePath $filePath -Password $pw -ChainOption BuildChain
} 
To import the .pfx certificates into the next WAP server:
$importFolder = "C:\Certificates\Imports"
$file = (Get-ChildItem -Path $importFolder -Recurse)
$pw = ConvertTo-SecureString -String "Pass@w0rd#1" -Force -AsPlainText
$certificate = $file | Import-PfxCertificate -CertStoreLocation Cert:\LocalMachine\My -Password $pw -Exportable # -Exportable lets you export again later
$certificate.DnsNameList | Select Unicode
Now that the certificates are installed, it's time to set up WAP for the incoming URLs. But first, let's compare the Add-WebApplicationProxyApplication cmdlet from Server 2012R2 with the Technical Preview version.
The items in yellow are the new properties at this point. The one that is really important to me is EnableHTTPRedirect. This will allow users to go to http://sp2013.contoso.com and get redirected to https://sp2013.contoso.com automatically, without having to open port 80 on your firewall (private network) or on your SharePoint server. Basically, this stops us from having to create a redirect in IIS as described in my blog post How to Redirect from HTTP to HTTPS with URL Rewrite.

Update HOSTS File

If you do not put any DNS server information in your internal NIC settings, you will have to put all of your DNS entries in the HOSTS file. Since our WAP server is not attached to any domain, and does not have access to the internal DNS, you will need to modify the server's HOSTS file so that requests can be routed correctly.
You will find the HOSTS file here (typically): C:\Windows\System32\drivers\etc
Since your ADFS server should be a demo server as well, you will also want to include the ADFS server address and URL in your HOSTS file. 
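The entries themselves are the standard HOSTS format; the addresses and names below are placeholders:
# Internal IP addresses of the published servers (placeholders)
10.0.1.10    sp2013.pcdemo.net
10.0.1.10    hybrid.pcdemo.net
10.0.1.20    sts.domain.net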

Bind to ADFS

The WAP server is meant to be associated with an existing ADFS farm, and to complete the installation of the WAP server, you will need to either run the Installation Wizard or run the Install-WebApplicationProxy cmdlet. If you are not using the GUI, your PowerShell will look something like this:
$certFriendlyName = "sts.domain.net"
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
$FScredential = Get-Credential
Install-WebApplicationProxy -FederationServiceName "sts.domain.net" -FederationServiceTrustCredential $FScredential -CertificateThumbprint $thumbprint

Publish the SharePoint On-Premises and WAC Web Applications 

The PowerShell for creating the web applications will be the same for your SharePoint needs (not for hybrid) as it will be for your WAC server.
$externalURL = "https://sp2013.pcdemo.net" # Incoming URL
$backendURL = "https://sp2013.pcdemo.net"  # URL for SharePoint WA
$name = "sp2013.pcdemo.net"                # WAC Web Application Name
$certFriendlyName = "san.pcDemo.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
    -ExternalUrl $externalURL `
    -BackendServerUrl $backendURL `
    -name $name `
    -ExternalCertificateThumbprint $thumbprint `
    -ClientCertificatePreauthenticationThumbprint $thumbprint `
    -DisableTranslateUrlInRequestHeaders:$False `
    -DisableTranslateUrlInResponseHeaders:$False `
    -EnableHTTPRedirect:$true
If you want to create Web Applications for all of the DNS names in your SAN certificate you can run this script:
$certFriendlyName = "san.pcDemo.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$dnsNames = $cert.DnsNameList | Select Unicode
foreach ($dnsName in $dnsNames) {
    $externalURL = "https://" + $dnsName.Unicode
    $backendURL = "https://" + $dnsName.Unicode
    $name = $dnsName.Unicode
    $thumbprint = $cert.Thumbprint
    Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
        -ExternalUrl $externalURL `
        -BackendServerUrl $backendURL `
        -name $name `
        -ExternalCertificateThumbprint $thumbprint `
        -ClientCertificatePreauthenticationThumbprint $thumbprint `
        -DisableTranslateUrlInRequestHeaders:$False `
        -DisableTranslateUrlInResponseHeaders:$False `
        -EnableHTTPRedirect:$true 
}

Publish the SharePoint Hybrid Web Application

This one is a bit different since the hybrid connection relies on pre-authentication using a certificate that you have uploaded to O365.
$externalURL = "https://hybrid.pcdemo.net"  # Incoming URL
$backendURL = "https://hybrid.pcdemo.net"   # URL for SharePoint WA
$name = "hybrid.pcdemo.net"                 # WAC Web Application Name
$certFriendlyName = "san.pcDemo.net"        # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication ClientCertificate `
   -ExternalUrl $externalURL `
   -BackendServerUrl $backendURL `
   -name $name `
   -ExternalCertificateThumbprint $thumbprint `
   -ClientCertificatePreauthenticationThumbprint $thumbprint `
   -DisableTranslateUrlInRequestHeaders:$False `
   -DisableTranslateUrlInResponseHeaders:$False

Publish the SharePoint App Domain

There is an issue with not having access to your internal network's DNS: the HOSTS file does not accept wildcard entries. To get around this, you can install a tool like Acrylic DNS Proxy: http://mayakron.altervista.org/wikibase/show.php?id=AcrylicHome
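Acrylic keeps its own hosts file (AcrylicHosts.txt), where a wildcard entry looks roughly like this; the IP address is a placeholder:
# AcrylicHosts.txt entry pointing the entire app domain at the internal SharePoint IP
10.0.1.10 *.pcdemo-apps.net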
To create your wildcard domain Web Application for your App Domain, run the following script.
$externalURL = "https://*.pcdemo-apps.net"  # Incoming URL
$backendURL = "https://*.pcdemo-apps.net"   # URL for SharePoint WA
$name = "*.pcdemo-apps.net"                 # WAC Web Application Name
$certFriendlyName = "pcDemo-apps.net"       # Cert Friendly Name
$cert = Get-ChildItem -path cert:\LocalMachine\My | Where {($_.FriendlyName -eq $certFriendlyName)}
$thumbprint = $cert.Thumbprint
Add-WebApplicationProxyApplication -ExternalPreauthentication PassThrough `
    -ExternalUrl $externalURL `
    -BackendServerUrl $backendURL `
    -name $name `
    -ExternalCertificateThumbprint $thumbprint `
    -ClientCertificatePreauthenticationThumbprint $thumbprint `
    -DisableTranslateUrlInRequestHeaders:$False `
    -DisableTranslateUrlInResponseHeaders:$False `
    -EnableHTTPRedirect:$true 
After you have created all of your Web Applications, if you go into the Remote Access Management Console, your list of Published Web Applications might look similar to this:
There you go! You now have the ability to create all the WAP Web Applications that you will need to allow external users into your SharePoint environment safely. It is easy to edit or delete the Web Applications in the GUI if you make a mistake.
The console will even supply the PowerShell for later use.
Once you have ADFS up and running, regardless of whether you use it, the new WAP server is even better than the previous product. Hopefully this post will help get you started on the road to creating a SharePoint Hybrid environment, or just creating a reverse proxy server to keep your SharePoint (or any other site) safe.

Test It Out!

Click on the links below and notice that they get redirected from port 80 to port 443.


Monday, January 26, 2015

PowerShell for Microsoft Azure- Creating Storage

This is part 4 of the blog series on getting started with Microsoft Azure.
Part 1: Microsoft Azure- Getting Started
Part 2: PowerShell for Microsoft Azure- Getting Started
Part 3: Microsoft Azure- Create Geo Redundancy and Virtual Networks
Part 4: PowerShell for Microsoft Azure- Creating Storage (This post)
Part 5: PowerShell for Microsoft Azure- Upload VHD Images (In Progress)
Part 6: PowerShell for Microsoft Azure- Create Machines (Coming soon)
Part 7: Active Directory and DNS in the Cloud and Azure AD (Coming Soon)
-----
So what is Azure Storage? Feel free to read up on the Introduction to Azure Storage site to get a good handle on Azure storage offerings. If you are not going to read about it, please know that there is a new offering called Premium Storage, which uses SSDs for low-latency IOPS to support I/O-intensive systems like SharePoint and SQL. You can read up more at the Azure Premium Storage site.
Think of Azure Storage as a secure container with a unique namespace that holds all of your storage requirements. You can store all kinds of objects such as blobs, tables, and files.

Let's Get Started

Azure is a great tool for building out servers through its GUI; however, I am not a big fan of the cryptic storage container names that Azure provides if you just create a server before creating your storage.
This cryptic name is carried down to the URL of the storage account.
Not a very user-friendly way to go...

Creating Storage- PowerShell

In the second post of this series, PowerShell for Microsoft Azure- Getting Started, we review just that... how to get started. Now it is time to start using some of your PowerShell skills. Let's say, for example, that you wish to upload your own images into a container named "Images", and you want to have your "Images" container live within your "Contoyso East Network". Unfortunately, storage and container names need to be all lowercase letters and/or numbers, with no spaces and no dashes. By default, when you create a storage account, Geo Redundant Storage is automatically enabled. This is perfect if your storage does not require high IOPS. To create a storage account run the following:
$storageAccountName = "contoysoweststorage1"
$containerName = "images"
# Create a new Azure Storage Account, set GEO Replication Type to Local Redundant Storage (Standard_LRS) for heavy IOPS
$storage = New-AzureStorageAccount -StorageAccountName $storageAccountName -Location "West Europe" # -Type Standard_LRS # Local Redundant Storage
# Get Storage Key information
$key = Get-AzureStorageKey -StorageAccountName $storageAccountName
# Get Context
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $key.Primary
# Create the container to store blob
$container = New-AzureStorageContainer -Name $containerName -Context $context
This would create a Storage Account that looks like this:

And the West Europe container would look like this:

Now, let's say that we need to put our data on SSDs as we are running SharePoint in Azure, and we want to optimize our SQL drives. Ideally you would want to put the drives in the new Premium Storage.
$storageAccountName = "contoysoweststorage2"
$containerName = "images"
# Create a new Azure Storage Account
New-AzureStorageAccount -StorageAccountName $storageAccountName -Location "West Europe" -Type Premium_LRS
# Get Storage Key information
$key = Get-AzureStorageKey -StorageAccountName $storageAccountName
# Get Context
$context = New-AzureStorageContext -StorageAccountName $storageAccountName -StorageAccountKey $key.Primary
# Create the container to store blob
$container = New-AzureStorageContainer -Name $containerName -Context $context

Unfortunately, at the time of this blog post, Azure Premium Storage is only available in the West US, East US 2, and West Europe regions. If your Azure account is not set up correctly, you will receive notice that your "subscription is not authorized for feature Premium Storage".

Create Storage for SQL

Just like on-premises storage for SQL, for best optimization of data transfer, you want to put your SQL storage on different LUNs. In Azure, you have the ability to break out your disk storage into different LUNs as well. 
Now, the VM tier and size that you are using (e.g., A0 Standard) determines the number of drives that you can have attached to your VM. So while architecting your Azure infrastructure/machines, make sure the VMs that you want to use support the number of disks required. I suggest reading through Virtual Machine and Cloud Service Sizes for Azure before building out your VMs.
Remember that in a virtual environment, a data disk associated with a specific VM is just another .vhd file that does not have an OS installed/assigned to it. So, to add a disk to a SQL Server, it is easier to add the data disk after the VM has already been created. We will go through creating the data disk in Part 6: PowerShell for Azure- Creating Virtual Machines.

Warning

Do not put all of your eggs in one basket. There is a potential for a Storage Account to get corrupted, and you could lose everything. Make sure that you have backups set up to a secondary Geo-Redundant Storage Account. If you have the ability to delay the creation of your backup storage account, it will lessen the likelihood of your Storage Accounts being created in the same storage pool within Azure. Here is an example of another storage strategy:
And once again, any time you are building out anything in Azure, you will want to do it through PowerShell:
# Local Redundant Storage Information
$lrsAccountNames = @("contoysostorage1east","contoysostorage2east","contoysobustorage1east","contoysobustorage2east")
$lrsContainerNames = @("images","sql-drives","data-drives")
# Global Redundant Storage Information
$grsAccountNames = @("contoysosqlbus")
$grsContainerNames = @("production-bus","development-bus")
# Location Information
$location = "East US"
# Pause Time before Creating Backup Storage Account in Minutes
$time = 120
# Create Local Redundant Storage
foreach ($lrsAccountName in $lrsAccountNames) {
    # Create a new Azure Storage Account, set GEO Replication Type to Local Redundant Storage (Standard_LRS) for heavy IOPS
    $storage = New-AzureStorageAccount -StorageAccountName $lrsAccountName -Location $location -Type Standard_LRS # Local Redundant Storage
    Write-Host("$lrsAccountName storage account created...")
    # Get Storage Key information
    $key = Get-AzureStorageKey -StorageAccountName $lrsAccountName
    # Get Context
    $context = New-AzureStorageContext -StorageAccountName $lrsAccountName -StorageAccountKey $key.Primary
    foreach ($lrsContainerName in $lrsContainerNames) {
        # Create the container to store blob
        $container = New-AzureStorageContainer -Name $lrsContainerName -Context $context
        Write-Host("$lrsContainerName container created...")
    }
}
# Pause before creating the backup storage account
$counter = 0
do {
    Start-Sleep -Seconds 60
    Write-Host("$counter of $time minutes completed...")
    $counter ++
}
While ($counter -le $time)
# Create Global Redundant Storage
foreach ($grsAccountName in $grsAccountNames) {
    # Create a new Azure Storage Account, set GEO Replication Type to Local Redundant Storage (Standard_LRS) for heavy IOPS
    $storage = New-AzureStorageAccount -StorageAccountName $grsAccountName -Location $location # -Type Standard_LRS # Local Redundant Storage
    Write-Host("$grsAccountName storage account created...")
    # Get Storage Key information
    $key = Get-AzureStorageKey -StorageAccountName $grsAccountName
    # Get Context
    $context = New-AzureStorageContext -StorageAccountName $grsAccountName -StorageAccountKey $key.Primary
    foreach ($grsContainerName in $grsContainerNames) {
        # Create the container to store blob
        $container = New-AzureStorageContainer -Name $grsContainerName -Context $context
        Write-Host("$grsContainerName container created...")
    }
}

If you take a look at the image of the locally redundant storage, you will notice that there are containers called "vhds". The PowerShell script does not create these containers since Azure will create the containers as needed when the Virtual Machines are created.

Conclusion

Managing your storage is very important when it comes to optimizing your IOPS and system latency. Using PowerShell to create your storage is also a nice way to make sure that you keep your naming schema clean and everything user-friendly.
Now that the storage infrastructure is in place, it is time to upload a .vhd into the images container. Please see my next post on Uploading VHDs.

Updates

03/31/2015 Added the Warning section about the potential for Storage Account corruption.
04/02/2015 Added image and PowerShell to Warning section



Saturday, January 17, 2015

Microsoft Azure- Create Geo Redundancy and Virtual Networks (VNet to VNet)

This is part 3 of the blog series on getting started with Microsoft Azure.
Part 1: Microsoft Azure- Getting Started
Part 2: PowerShell for Microsoft Azure- Getting Started
Part 3: Microsoft Azure- Create Geo Redundancy and Virtual Networks (This post)
Part 4: PowerShell for Microsoft Azure- Creating Storage
Part 5: PowerShell for Microsoft Azure- Upload VHD Images (In Progress)
Part 6: PowerShell for Microsoft Azure- Create Machines (Coming soon)
Part 7: Active Directory and DNS in the Cloud and Azure AD (Coming Soon)
----------
At this point we have our Azure Tenant created, and the subscription named appropriately. We also have an idea of what data centers will be hosting Contoyso's data. If you have not looked at the first blog post, this is where we are for our data hosting.

Let's Get Started

To say that Microsoft's data centers are huge is an understatement, as the data centers are built to host, essentially, shipping containers of racked servers and hardware. The biggest problem moving forward, just like in any on-premises data center, is how to minimize latency between servers. To keep servers close together, reducing latency and increasing performance, Azure uses a feature called Affinity Groups. Affinity Groups aggregate the compute and storage services not just into the same data center, but into the same server cluster, and historically were assigned to specific physical pieces of hardware.
In the beginning, you had to tie your network to an Affinity Group, but this caused problems within Microsoft: the Affinity Groups were tied to hardware, and when the hardware needed to be replaced, issues arose. So, Affinity Groups are no longer required to be tied to networks. You can still associate them, but we will not be going over that in this blog post. Also, if you have an existing Affinity Group associated with a network, it will eventually be migrated to a Regional VNet. For more information read About Regional VNets and Affinity Groups for Virtual Network.

Getting Your Local VPN Gateway IP Address

Your network administrator should be able to give this to you, or if you wish, just Bing "What is my IP address" to get your external gateway address. If this is a demo environment and you do not have a fixed IP, this could be a problem, because you could drop your Site-to-Site (S2S) VPN tunnel when you lease a new IP from your provider. If this happens, all you would need to do is update Azure with your new IP address and reconnect... If this is for your production environment and you have a dynamic IP address, you should really get a static IP.
DOCUMENT YOUR GATEWAY IP ADDRESS

Creating Your Local Networks within Azure

Now is a good time to bring back out Excel and continue with the documentation of the on-premises and Azure infrastructure.

You can download the Excel file from http://1drv.ms/1xvEfcW
The first thing that needs to be accomplished is creating the Local Networks. From the left hand navigation, select  NETWORKS, and then select LOCAL NETWORKS from the top navigation.
Once you are on the LocalNetworks page, click ADD A LOCAL NETWORK. 

This will open a modal window to fill out. Hopefully you have already filled out your table and have all of the required information handy. For your first network, enter your on-premises information, with the VPN Device IP being the IP address of your corporate gateway.
On the next page, enter your IP Address and CIDR information, and click the check to continue.
To add the next local network, click the NEW button at the bottom of the page and select ADD LOCAL NETWORK.
Create your other two local networks, and when you have completed your local networks, you should have a list of local networks similar to this:
At this point, the virtual network IP ranges have been created, but the VPN Gateway Addresses for East US and West Europe have not yet been filled in. Before we can add our VPN gateway information, the virtual networks will need to be created; the gateway IP addresses will be assigned during the creation of the virtual networks.

Creating Virtual Networks within Azure

You can read Microsoft's marketing about virtual networks by going to http://azure.microsoft.com/en-us/services/virtual-network/. But essentially, an Azure Virtual Network is the infrastructure on which to build out your cloud network environment, or to expand your on-premises network into the cloud. It is your Infrastructure-as-a-Service (IaaS) and Platform-as-a-Service (PaaS) all rolled into one.

Create Your First Virtual Network

After creating the Local Networks within your Azure Tenant, it is time to create the Virtual Networks. Virtual networks are the networks within the Azure data centers. Since the on-premises network is already up and running, the only networks that need to be created at this point are the East US and West Europe virtual networks. To get started, select the NETWORKS link in the left navigation followed by the VIRTUAL NETWORKS link in the upper navigation. This brings you to the virtualnetworks page.
Click on the CREATE A VIRTUAL NETWORK link to get started. This will open up a modal window, and fill out the information appropriately (look at the table you created earlier).
On the second page, select the Configure site-to-site VPN and select the on-premises local network. Since Contoyso is planning on extending the network into the cloud, Azure will need to know about the local DNS servers.
In Azure, DNS entries are set in a priority ranking, not in a round-robin. The first entry is the first DNS server queried, so plan appropriately. Later on we will be adding the DNS server information via the NetworkConfig.xml file as well.

On the third page, click the add gateway subnet button.
Click the check mark to finish.
After the first network has been completed, your virtualNetworks page should look like this:
At this point, we need to create our gateway and get the IP address before we create any of our other networks. To do this, click on the appropriate network, in this case the Azure East US network, and select DASHBOARD from the top navigation.
At this point, since a gateway has not been created, click the CREATE GATEWAY button at the bottom of the page.
This will bring up the option for either Dynamic or Static Routing... Select Dynamic Routing, as Static Routing is not supported for the Site-to-Site (S2S) VPN tunnel that is about to be created. Select Yes to create the gateway... You will notice that the gateway is now yellow, as it is being created within Azure. The gateway will turn green once the S2S tunnel is established.
Gateway creation takes a fair amount of time, so go make a pot of coffee and come back in about 30 minutes. When the gateway has been created, add the gateway IP address to the Excel file.
Your dashboard should look like this:
and your Excel table should look like this:
After the gateway has been created, go ahead and click the CONNECT button at the bottom of the page. At this point, we have not created the RRAS connection on-premises, so nothing will connect, but the connection has been activated within Azure for when we get RRAS up and running. The final step is to add the IP address to the vNet-East settings on the localNetworks page, by clicking on LOCAL NETWORKS in the top navigation, selecting the vNet-East network, and clicking the EDIT button.
Enter your IP address on the first page:
On the second page, click the check to finish adding the IP address.

Create Your Second Virtual Network

It is now time to create the second virtual network. From the virtualNetworks page, click NEW to start the process of creating the second virtual network (West Europe).
On the first page, add the appropriate name from your Excel table and the appropriate location for the virtual network.
On the second page, select Configure a site-to-site VPN, which will add a couple of pieces of information to fill out. For this blog post, we will not be using ExpressRoute; from the LOCAL NETWORK drop-down, select the network that was just created (vNet-East). Don't forget to add the on-premises DNS server information...
The final page will require you to create a gateway subnet by clicking the add gateway subnet button before you finish creating your new virtual network.
And, just as before, it is time to create the gateway and enable the connection.
Click the new virtual network name, click on DASHBOARD from the top navigation, click CREATE GATEWAY at the bottom of the page, and select Dynamic Routing. Then select Yes to create the gateway, and once again, find something else to do for the next 20-30 minutes...
After everything is created, click the CONNECT link at the bottom of the page. At this time the dashboard page looks like this: 
Grab the gateway IP address and add it to your Excel Table. Your Excel table should now be complete.
Now that all the gateway addresses have been created, update the  LOCAL NETWORK VPN Gateway Addresses.
In the big scheme of things this is where we are:

Add the Local Networks VPN Gateway Addresses

Click on the LOCAL NETWORKS link at the top of the Networks page. Select the East Network and click EDIT to add the East Network's Gateway Address. 
On the second page, click the check mark to continue. Then, following the same procedure, add the Gateway IP address for the West Network. When you are finished adding the Gateway IP information, your localNetworks page should look similar to this:

Connect the Networks

Now, this is where things get a bit less fun. At this time, it is only possible to create one connection per gateway endpoint through the Azure GUI. To get around this limitation, we will export a copy of the network configuration as an XML file, modify it, and upload the modified file back into Azure to complete all of our network connections.
To start, go to the NETWORKS page and at the bottom of the page, click the EXPORT link. 
This will open a modal window that will ask for the subscription you wish to export (see my blog post: Microsoft Azure- Getting Started to create a friendly subscription name). Once you click the check box to continue, a download dialog box will open. Save the NetworkConfig.xml document someplace handy then make a copy of it... just in case you blow up your network.
You can also download the file via PowerShell:
Get-AzureVNetConfig -ExportToFile c:\scripts\vNetConfig.xml
Remember which networks connect to which? The East network connects to the DC and West networks, while the West network connects to the DC and East networks.
At this point we are going to crack open Visual Studio. We installed it, along with the Azure module for PowerShell, in the previous blog post, PowerShell for Microsoft Azure- Getting Started.
Open the NetworkConfig.xml in Visual Studio or your favorite XML editor.
Within the VirtualNetworkSites section, we are interested in modifying the ConnectionsToLocalNetwork elements by adding the appropriate LocalNetworkSiteRef references.
At this point, it is time to add the other Local Network to each gateway, making sure that East connects to West and DC while West connects to East and DC (sketched below).
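As a rough sketch of what this looks like (the element names follow the classic NetworkConfig.xml schema; the site names come from our Excel table, and the subnet details are omitted for brevity), the East network's gateway section gains a second LocalNetworkSiteRef:
<VirtualNetworkSite name="Azure East US" Location="East US">
  ...
  <Gateway>
    <ConnectionsToLocalNetwork>
      <LocalNetworkSiteRef name="Contoyso-DC">
        <Connection type="IPsec" />
      </LocalNetworkSiteRef>
      <LocalNetworkSiteRef name="vNet-West">
        <Connection type="IPsec" />
      </LocalNetworkSiteRef>
    </ConnectionsToLocalNetwork>
  </Gateway>
</VirtualNetworkSite>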
While you have the XML file open, let's look at the DNS entries as well. As discussed earlier, if you wish to have the Azure servers know about the on-premises servers, Azure will need to know about your on-premises DNS. First, look for the DNS server(s), and then make sure the server names and IP addresses are correct.
At the top of the XML file, you can see that AD-02 (the DNS server) is available within the VirtualNetworkConfiguration. Next, make sure that the DNS server reference is associated with each network; for example, make sure that each virtual network has the AD-02 reference. If at a later time you need to add or edit DNS servers, you will need to modify this XML.
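For reference, the two DNS pieces look roughly like this in the classic schema (the IP address below is a placeholder; use your own DNS server's address):
<Dns>
  <DnsServers>
    <DnsServer name="AD-02" IPAddress="192.168.1.5" />
  </DnsServers>
</Dns>
...
<VirtualNetworkSite name="Azure East US" Location="East US">
  <DnsServersRef>
    <DnsServerRef name="AD-02" />
  </DnsServersRef>
  ...
</VirtualNetworkSite>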
Make sure you save your file before you try to import it back into Azure.
Now that the DNS entries have been verified and the network is configured correctly, it is time to upload the file back into Azure. To upload the file using PowerShell:
Set-AzureVNetConfig -ConfigurationPath c:\scripts\vNetConfig.xml
To use the GUI, click the NEW button at the bottom of the page, select NETWORK SERVICES --> VIRTUAL NETWORK --> IMPORT CONFIGURATION, and import your NetworkConfig.xml file. On the next page, Azure will show you the changes that will be made to your network so that you can validate them.
This page shows what is to be created after the original networks get deleted.
Above are the original networks that need to be deleted before the networks can be recreated.
After the new networks have been created, click into the DASHBOARD page of one of the networks and you will see that the gateway connections are still disconnected. This is because each connection is secured by a shared key, and matching keys must be set on both ends before the connections will establish. To set the keys, you need to open PowerShell and log in to your tenant. If you do not know how to use PowerShell with Azure, please review my previous blog post PowerShell for Microsoft Azure- Getting Started.
We will need to set matching keys for the East and West virtual networks' connection to each other, plus a key for each virtual network's connection to the on-premises local network gateway.
$virtualNetworkEast = "Azure East US"
$virtualNetworkWest = "Azure West Europe"
$localNetworkEast = "vNet-East"
$localNetworkWest = "vNet-West"
$localNetworkOnPrem = "Contoyso-DC"
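# Note: both sides of a connection must use the same shared key;
# the East-to-West and West-to-East keys below must match.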
# Set key for East to West
Set-AzureVNetGatewayKey -VNetName $virtualNetworkEast -LocalNetworkSiteName $localNetworkWest -SharedKey "lxJeyvLdcgPutInYourOwnKeys1Aaksw1SplbnyK7YH"
# Set key for East to On-Premises
Set-AzureVNetGatewayKey -VNetName $virtualNetworkEast -LocalNetworkSiteName $localNetworkOnPrem -SharedKey "lxJeyvLdcgPutInYourOwnKeys1Aaksw1SplbnyK7YH"
# Set key for West to East
Set-AzureVNetGatewayKey -VNetName $virtualNetworkWest -LocalNetworkSiteName $localNetworkEast -SharedKey "lxJeyvLdcgPutInYourOwnKeys1Aaksw1SplbnyK7YH"
# Set key for West to On-Premises
Set-AzureVNetGatewayKey -VNetName $virtualNetworkWest -LocalNetworkSiteName $localNetworkOnPrem -SharedKey "lxJeyvLdcgPutInYourOwnKeys1Aaksw1SplbnyK7YH"
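If you would rather not wait on the portal to confirm the keys took, the classic module can report the connection states as well (a sketch; give the gateways a few minutes before running it):
Get-AzureVNetConnection -VNetName $virtualNetworkEast | Select-Object LocalNetworkSiteName, ConnectivityState
Get-AzureVNetConnection -VNetName $virtualNetworkWest | Select-Object LocalNetworkSiteName, ConnectivityState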

Create Your Server to Server VPN Tunnel

After a couple of minutes, your Azure East and West virtual networks will have connected to each other. Now the final, and probably the easiest, task is to get the tunnel established from on-premises to your East and West gateways.
Luckily, Azure has made that very simple for us. If you go to the Dashboard of your East Network, on the right-hand side is a link labeled Download VPN Device Script.
When you click the Download link, a modal window will pop up to select the correct type of configuration script that will need to be run on-premises. For this post we will be using Microsoft's RRAS on Server 2012R2.
Clicking the check box will pop up an Open or Save dialog window. From the Save menu, select Save as, and save the file to the c:\scripts folder. After the download has been completed, rename the file to East-S2S-VPN.ps1
Once again, please open up PowerShell ISE as administrator, and open the newly downloaded file East-S2S-VPN.ps1
If you run the file as is, the required roles and features will be installed automatically. The only thing that I am not happy about with this script is that the RRAS connection name uses the gateway IP address, not the network name, for identification. So at this point, we will modify the script to make the connection name match the network name.
Below line 70 is the section that installs the S2S VPN connection. Within this section of code, we will replace the gateway IP address used as the connection name with the name of your network (see the sketch after this list). Do not do a find-and-replace-all, as we still need the IP address to connect to the gateway.
  • In the Add-VpnS2SInterface replace the -Name parameter value with the name of your Network.
  • In the Set-VpnS2SInterface replace the -Name parameter value with the name of your Network.
  • In both Set-PrivateProfileString locations replace the IP Address with the name of your Network.
  • In the Connect-VpnS2SInterface replace the -Name parameter value with the name of your Network.
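As a rough sketch of the result (hypothetical values throughout: 203.0.113.10 stands in for your East gateway IP, 10.0.1.0/24 for the East address space, the shared secret must match the key set earlier with Set-AzureVNetGatewayKey, and the exact parameters in your downloaded script may differ by Azure release):
# Before: the generated script used the gateway IP for both -Name and -Destination.
# After: -Name becomes the network name; -Destination keeps the gateway IP.
Add-VpnS2SInterface -Protocol IKEv2 -AuthenticationMethod PSKOnly -NumberOfTries 3 -ResponderMode $false -Name "Azure East US" -Destination "203.0.113.10" -IPv4Subnet @("10.0.1.0/24:100") -SharedSecret "lxJeyvLdcgPutInYourOwnKeys1Aaksw1SplbnyK7YH"
# Make the same name swap in the Set-VpnS2SInterface and Set-PrivateProfileString lines, then:
Connect-VpnS2SInterface -Name "Azure East US"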

The script needs to run on your RRAS server, but I have never been able to run the entire script at once successfully. I suggest running the top 69 lines to install the RRAS roles and features first, then running the configuration lines of the script one at a time.
After running the script on the RRAS server, go to your Routing and Remote Access Manager to see if the VPN has connected yet. It could take several minutes to connect, and don't forget to refresh the Network Interfaces page or you could be waiting a lot longer. Your Network Interfaces page should look similar to this:
Repeat the same process for the West network. 
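Rather than refreshing the GUI, you can also check both tunnels from PowerShell on the RRAS server using the RemoteAccess module (assuming you renamed the interfaces as described above):
Get-VpnS2SInterface | Select-Object Name, Destination, ConnectionState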
After the West network has been created your Network Interfaces page should look similar to this:
Now, if you go back into your Azure Networks page and go to the DASHBOARD page of your East Network, it should look similar to this:
While still on your DASHBOARD page, select the West Network, and it too should say connected with a couple of green checks:

Conclusion

At this point, you should have access to both the East and West Azure data centers from your on-premises network. We have completed our goal of creating a network that looks like this:

What's Next?

The next goal is to add Active Directory and DNS into the cloud. There is a bit of work that needs to be done to get there, such as setting up storage and cloud services. So stay tuned for the next blog post, PowerShell for Microsoft Azure- Creating Storage (in progress).
-----
This is part 3 of the blog series on getting Started with Microsoft Azure.
Part 1: Microsoft Azure- Getting Started
Part 2: PowerShell for Microsoft Azure- Getting Started
Part 3: Microsoft Azure- Create Geo Redundancy and Virtual Networks (This post)

UPDATES
01/19/2015 Removed section on creating Affinity Groups, and added information on why they are not needed any longer. Also added the information for inputting the on-premises DNS information.